WO2019223536A1 - Display apparatus with intelligent user interface - Google Patents
Display apparatus with intelligent user interface
- Publication number
- WO2019223536A1 (PCT application PCT/CN2019/086009)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- scene
- display apparatus
- video content
- user
- command
- Prior art date
Classifications
- All classifications fall under H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]:
- H04N21/8549—Creating video summaries, e.g. movie trailer
- H04N21/42203—Input-only peripherals, i.e. input devices connected to specially adapted client devices; sound input device, e.g. microphone
- H04N21/4828—End-user interface for program selection for searching program descriptors
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments, by decomposing the content in the time domain
Definitions
- This application generally relates to a display apparatus such as a television.
- this application describes a display apparatus with an intelligent user interface.
- the current breed of higher-end televisions typically includes network connectivity to facilitate streaming video content from various content servers.
- these televisions utilize operating systems that facilitate execution of apps for other purposes.
- in a first aspect, a display apparatus includes user input circuitry for receiving user commands and a display for outputting video content and a user interface.
- the video content includes metadata.
- the apparatus also includes a processor in communication with the user input circuitry and the display, and non-transitory computer readable media in communication with the processor that stores instruction code.
- the instruction code is executed by the processor and causes the processor to receive, from the user input circuitry, a first scene command to search for scenes in the video content of a scene type.
- the processor determines, from the metadata, one or more scenes in the video content related to the scene type.
- the processor then updates the user interface to depict one or more scene images related to the one or more scenes related to the scene type.
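The patent does not prescribe a metadata schema or matching algorithm for this scene search. As a minimal sketch, assuming a hypothetical schema in which each scene entry carries a type tag and a representative still image, the determination step might reduce to a simple filter:

```python
from dataclasses import dataclass

@dataclass
class Scene:
    start_s: float    # offset of the scene in the video, in seconds
    scene_type: str   # e.g. "goal", "penalty" (hypothetical tag set)
    image_uri: str    # still image representing the scene

def find_scenes(metadata: list[Scene], scene_type: str) -> list[Scene]:
    """Return scenes whose type tag matches the requested scene type."""
    wanted = scene_type.lower()
    return [s for s in metadata if s.scene_type.lower() == wanted]

# Example: a "show me all the goals" command reduces to scene_type="goal".
metadata = [
    Scene(754.0, "goal", "frame_754.jpg"),
    Scene(1310.5, "penalty", "frame_1310.jpg"),
    Scene(2480.2, "goal", "frame_2480.jpg"),
]
print(find_scenes(metadata, "goal"))  # -> the two goal scenes
```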
- a method for controlling a display apparatus includes receiving, via user input circuitry, user commands, and outputting, via a display, video content and a user interface.
- the video content includes metadata.
- the method includes receiving, from the user input circuitry, a first scene command to search for scenes in the video content of a scene type; determining, from the metadata, one or more scenes in the video content related to the scene type; and updating the user interface to depict one or more scene images related to the one or more scenes related to the scene type.
- a non-transitory computer readable media that stores instruction code for controlling a display apparatus.
- the instruction code is executable by a computer for causing the computer to receive, from user input circuitry, a first scene command to search for scenes in the video content of a scene type; determine, from metadata of video content, one or more scenes in the video content related to the scene type; and update a user interface to depict one or more scene images related to the one or more scenes related to the scene type.
- in a fourth aspect, a display apparatus includes user input circuitry for receiving user commands and a display for displaying video content and a user interface.
- the apparatus also includes a processor in communication with the user input circuitry, the display, and a search history database; and non-transitory computer readable media in communication with the processor that stores instruction code.
- the instruction code is executed by the processor and causes the processor to receive, from the user input circuitry, a first search command.
- the processor determines from the first search command one or more potential search commands related to the first search command.
- the processor updates the user interface to depict one or more of the potential search commands and receive, from the user input circuitry, a second search command that corresponds to one of the one or more potential search commands.
- the processor determines video content associated with the first and second search commands; and updates the user interface to depict one or more controls, each being associated with different video content of the determined video content.
- the instruction code causes the processor to update the user interface to depict unique identifiers over each of the one or more controls; receive, from the user input circuitry, a third search command that specifies one of the unique identifiers; and display video content associated with the specified unique identifier.
- the first and second search commands correspond to voice commands
- the instruction code causes the processor to implement a natural language processor; and determine, via the natural language processor, a meaning of the voice commands.
- the instruction code causes the processor to update the search history database to reflect the fact that the second search command was selected to thereby increase a likelihood that the second search command will be predicted during a subsequent search.
- the instruction code causes the processor to predict the one or more potential search commands based at least in part on a history of search commands specified by the user stored in the search history database.
- the instruction code causes the processor to update the user interface to depict a phrase that corresponds to the first and second search commands, where the phrase is updated in real-time as the user specifies different search commands.
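How the search history database biases prediction is left open; a minimal sketch, assuming a simple frequency model in which each selected follow-up command is counted against the command that preceded it, could look like this (names and structure are illustrative, not the patent's):

```python
from collections import Counter, defaultdict

class SearchHistory:
    """Toy search-history store: selecting a follow-up command makes it
    more likely to be predicted after the same first command later."""

    def __init__(self):
        self._counts = defaultdict(Counter)

    def record_selection(self, first_cmd: str, selected_cmd: str) -> None:
        self._counts[first_cmd][selected_cmd] += 1

    def predict(self, first_cmd: str, k: int = 3) -> list[str]:
        return [cmd for cmd, _ in self._counts[first_cmd].most_common(k)]

history = SearchHistory()
history.record_selection("find movies", "with Tom Hanks")
history.record_selection("find movies", "with Tom Hanks")
history.record_selection("find movies", "from the 1990s")
print(history.predict("find movies"))  # ["with Tom Hanks", "from the 1990s"]
```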
- a method for controlling a display apparatus includes receiving, via user input circuitry, user commands; displaying video content and a user interface; and receiving, from the user input circuitry, a first search command.
- the method further includes determining from the first search command one or more potential search commands related to the first search command; updating the user interface to depict one or more of the potential search commands; and receiving, from the user input circuitry, a second search command that corresponds to one of the one or more potential search commands.
- the method also includes determining video content associated with the first and second search commands; and updating the user interface to depict one or more controls, each being associated with different video content of the determined video content.
- the method further includes updating the user interface to depict unique identifiers over each of the one or more controls; receiving, from the user input circuitry, a third search command that specifies one of the unique identifiers; and displaying video content associated with the specified unique identifier.
- the first and second search commands correspond to voice commands
- the method further includes implementing a natural language processor; and determining, via the natural language processor, a meaning of the voice commands.
- the method further includes updating the search history database to reflect the fact that the second search command was selected to thereby increase a likelihood that the second search command will be predicted during a subsequent search.
- the method further includes predicting the one or more potential search commands based at least in part on a history of search commands specified by the user stored in the search history database.
- the method further includes updating the user interface to depict a phrase that corresponds to the first and second search commands, where the phrase is updated in real-time as the user specifies different search commands.
- a non-transitory computer readable media that stores instruction code for controlling a display apparatus.
- the instruction code is executable by a computer for causing the computer to receive, from user input circuitry of the computer, a first search command; determine from the first search command one or more potential search commands related to the first search command; update a user interface of the computer to depict one or more of the potential search commands; receive, from the user input circuitry, a second search command that corresponds to one of the one or more potential search commands; determine video content associated with the first and second search commands; and update the user interface to depict one or more controls, each being associated with different video content of the determined video content.
- the instruction code causes the computer to update the user interface to depict unique identifiers over each of the one or more controls; receive, from the user input circuitry, a third search command that specifies one of the unique identifiers; and display video content associated with the specified unique identifier.
- the first and second search commands correspond to voice commands
- the instruction code causes the computer to implement a natural language processor; and determine, via the natural language processor, a meaning of the voice commands.
- the instruction code causes the computer to update the search history database to reflect the fact that the second search command was selected to thereby increase a likelihood that the second search command will be predicted during a subsequent search.
- the instruction code causes the computer to predict the one or more potential search commands based at least in part on a history of search commands specified by the user stored in the search history database.
- the instruction code causes the computer to update the user interface to depict a phrase that corresponds to the first and second search commands, where the phrase is updated in real-time as the user specifies different search commands.
- in a seventh aspect, a display apparatus includes user input circuitry for receiving user commands and a display for outputting video content and a user interface.
- the video content includes metadata.
- the apparatus also includes a processor in communication with the user input circuitry and the display, and non-transitory computer readable media in communication with the processor that stores instruction code.
- the instruction code is executed by the processor and causes the processor to receive, from the user input circuitry, a query regarding an image of the video content currently displayed on the display; determine one or more objects of the image associated with the query based on the metadata; update the user interface to depict one or more controls, each control being associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to depict information related to the selection.
- the instruction code causes the processor to determine one or more potential second queries related to the first query and the determined one or more objects; update the user interface to depict one or more of the one or more potential second queries; receive, from the user input circuitry, a second query that corresponds to one of the one or more potential second queries; determine one or more objects of the image associated with the first and the second query based on the metadata; update the user interface to depict one or more controls, each control being associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to depict information related to the selection.
- the instruction code causes the processor to update the user interface to depict unique identifiers over each of the one or more controls; receive, from the user input circuitry, a command that specifies one of the unique identifiers; and display information associated with the selection that is associated with the specified unique identifier.
- the query and the selection correspond to voice commands
- the instruction code causes the processor to implement a natural language processor; and determine, via the natural language processor, a meaning of the voice commands.
- the metadata defines a hierarchy of queries.
- each of the one or more controls corresponds to an image associated with an object of the one or more determined objects.
- the instruction code causes the processor to update the user interface to depict a phrase that corresponds to the first and second queries, where the phrase is updated in real-time as the user specifies different queries.
- the video content continues to stream while the display depicts the one or more controls and the information related to the selection.
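One plausible reading of a metadata-defined "hierarchy of queries" is a tree whose nodes progressively narrow the set of matching objects. The sketch below is an assumption-laden illustration (the frame-object fields, tags, and query tree are all invented):

```python
# Hypothetical per-frame metadata: objects visible in the current image,
# each tagged with categories a query can match against.
frame_objects = [
    {"id": 1, "label": "striker's jersey", "tags": {"clothing", "jersey"}},
    {"id": 2, "label": "goalkeeper's gloves", "tags": {"clothing", "gloves"}},
    {"id": 3, "label": "match ball", "tags": {"equipment", "ball"}},
]

# Hypothetical query hierarchy: a first query can be refined by second queries.
query_tree = {
    "what are they wearing": {
        "tag": "clothing",
        "refinements": {"just the jerseys": "jersey", "just the gloves": "gloves"},
    },
}

def match_objects(objects, tag):
    """Objects whose tag set contains the tag the query resolved to."""
    return [o for o in objects if tag in o["tags"]]

node = query_tree["what are they wearing"]
controls = match_objects(frame_objects, node["tag"])   # first query
print([o["label"] for o in controls])                  # two clothing objects
print(list(node["refinements"]))                       # potential second queries
refined = match_objects(controls, node["refinements"]["just the jerseys"])
print([o["label"] for o in refined])                   # narrowed to the jersey
```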
- a method for controlling a display apparatus includes receiving, via user input circuitry, user commands; and displaying video content and a user interface.
- the video content includes metadata.
- the method includes receiving, from the user input circuitry, a query regarding an image of the video content currently displayed; determining one or more objects of the image associated with the query based on the metadata; updating the user interface to depict one or more controls, each control being associated with one of the determined one or more objects; receiving a selection of one of the controls; and updating the user interface to depict information related to the selection.
- the method further includes determining one or more potential second queries related to the first query and the determined one or more objects; updating the user interface to depict one or more of the one or more potential second queries; receiving, from the user input circuitry, a second query that corresponds to one of the one or more potential second queries; determining one or more objects of the image associated with the first and the second query based on the metadata; updating the user interface to depict one or more controls, each control being associated with one of the determined one or more objects; receiving a selection of one of the controls; and updating the user interface to depict information related to the selection.
- the method further includes updating the user interface to depict unique identifiers over each of the one or more controls; receiving, from the user input circuitry, a command that specifies one of the unique identifiers; and displaying information associated with the selection that is associated with the specified unique identifier.
- the query and the selection correspond to voice commands
- the method further includes implementing a natural language processor; and determining, via the natural language processor, a meaning of the voice commands.
- the metadata defines a hierarchy of queries.
- each of the one or more controls corresponds to an image associated with an object of the one or more determined objects.
- the method further includes depicting a phrase that corresponds to the first and second queries, where the phrase is updated in real-time as the user specifies different queries.
- the video content continues to stream while the one or more controls and the information related to the selection are depicted.
- a non-transitory computer readable media that stores instruction code for controlling a display apparatus.
- the instruction code is executable by a computer for causing the computer to receive, from a user input circuitry of the computer, a query regarding an image of video content currently depicted on a display of the computer; determine one or more objects of the image associated with the query based on metadata; update a user interface of the computer to depict one or more controls, each control being associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to depict information related to the selection.
- the instruction code causes the computer to determine one or more potential second queries related to the first query and the determined one or more objects; update the user interface to depict one or more of the one or more potential second queries; receive, from the user input circuitry, a second query that corresponds to one of the one or more potential second queries; determine one or more objects of the image associated with the first and the second query based on the metadata; update the user interface to depict one or more controls, each control being associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to depict information related to the selection.
- the instruction code causes the computer to update the user interface to depict unique identifiers over each of the one or more controls; receive, from the user input circuitry, a command that specifies one of the unique identifiers; and display information associated with the selection that is associated with the specified unique identifier.
- the query and the selection correspond to voice commands
- the instruction code causes the computer to implement a natural language processor; and determine, via the natural language processor, a meaning of the voice commands.
- in a tenth aspect, a display apparatus includes user input circuitry for receiving user commands and a display for outputting video content and a user interface.
- the video content includes metadata.
- the apparatus also includes a processor in communication with the user input circuitry and the display, and non-transitory computer readable media in communication with the processor that stores instruction code.
- the instruction code is executed by the processor and causes the processor to receive a pause command from a user to thereby pause the video content so that the display depicts a still image; subsequently determine one or more objects in the still image based on the metadata; update the user interface to depict one or more controls, each control being associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to depict information related to the selection.
- each of the one or more controls corresponds to an image associated with an object of the one or more determined objects.
- the controls include at least one of: an advertisement related to one of the objects, a share control to share the video content, a rating control to rate the video content, and an information control to display information related to one of the objects.
- the depicted information related to the selection includes a QR code associated with a URL to information related to the selection.
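Rendering such a QR code is straightforward; a minimal sketch using the third-party Python qrcode package, with a placeholder URL standing in for the selection's information link:

```python
import qrcode  # third-party package: pip install qrcode[pil]

def make_selection_qr(info_url: str, out_path: str) -> None:
    """Render a QR code image that encodes the URL for the selected object."""
    img = qrcode.make(info_url)
    img.save(out_path)

# Hypothetical URL for information about the selected object.
make_selection_qr("https://example.com/products/12345", "selection_qr.png")
```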
- a method for controlling a display apparatus includes receiving, via user input circuitry, user commands; and displaying video content and a user interface.
- the video content includes metadata.
- the method includes receiving a pause command from a user to thereby pause the video content so that a still image is depicted; subsequently determining one or more objects in the still image based on the metadata; updating the user interface to depict one or more controls, each control being associated with one of the determined one or more objects; receiving a selection of one of the controls; and updating the user interface to depict information related to the selection.
- each of the one or more controls corresponds to an image associated with an object of the one or more determined objects.
- the controls include at least one of: an advertisement related to one of the objects, a share control to share the video content, a rating control to rate the video content, and an information control to display information related to one of the objects.
- the depicted information related to the selection includes a QR code associated with a URL to information related to the selection.
- a non-transitory computer readable media that stores instruction code for controlling a display apparatus.
- the instruction code is executable by a computer for causing the computer to receive a pause command from a user to thereby pause video content so that a display of the computer depicts a still image; subsequently determine one or more objects in the still image based on metadata of the video content; update a user interface of the computer to depict one or more controls, each control being associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to depict information related to the selection.
- each of the one or more controls corresponds to an image associated with an object of the one or more determined objects.
- the controls include at least one of: an advertisement related to one of the objects, a share control to share the video content, a rating control to rate the video content, and an information control to display information related to one of the objects.
- the depicted information related to the selection includes a QR code associated with a URL to information related to the selection.
- a display apparatus includes presence detection circuitry for detecting an individual in proximity to the display apparatus; a display for displaying video content and a user interface; a processor in communication with the presence detection circuitry and the display; and non-transitory computer readable media in communication with the processor that stores instruction code, which when executed by the processor, causes the processor to determine, from the presence detection circuitry, whether a user is in proximity of the display apparatus; when the user is determined to not be in proximity of the display apparatus, cause the video content to pause; and when the user is determined to subsequently be in proximity of the display apparatus, cause the video content to resume.
- the presence detection circuitry includes an imager for capturing images in front of the display apparatus, and the instruction code causes the processor to periodically cause the imager to capture an image; analyze the captured image to identify face data; and compare the face data with face data associated with the user to determine whether the user is in proximity of the display apparatus.
- a plurality of users that includes a primary user are in proximity of the display apparatus, when the primary user is subsequently determined to not be in proximity of the display apparatus, the video content is paused and the user interface is updated to indicate that the video content is paused; and when the primary user is subsequently determined to be in proximity of the display apparatus the video content is resumed and the user interface is updated to indicate that the video content is resumed.
- the presence detection circuitry includes near field communication circuitry for performing near field communications with a device that is in proximity of the display apparatus, and the instruction code causes the processor to determine whether the user is in proximity of the display apparatus by detecting near field communications from a portable device associated with the user.
- the processor updates the user interface on the display apparatus to indicate that the video content is paused; and when the video content is resumed, the processor updates the user interface on the display apparatus to indicate that the video content is resumed.
- when the user interface indicates that the video content is paused, the user interface is updated to depict information related to the content of the video.
- the information related to the content of the video includes advertising information related to the content.
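A sketch of the presence-detection loop described above, with the imager capture, face encoding, and similarity steps stubbed out (a real implementation would substitute camera and face-recognition back ends; the player object, threshold, and polling period are assumptions):

```python
import time

def capture_image():
    """Placeholder for grabbing a frame from the imager."""
    return object()

def face_embedding(image):
    """Placeholder for a face detector/encoder; returns a vector or None."""
    return None

def similarity(a, b) -> float:
    """Placeholder for an embedding similarity measure (e.g. cosine)."""
    return 0.0

def presence_loop(player, user_embedding, threshold=0.8, period_s=5.0):
    """Periodically capture an image, compare face data with the user's
    face data, and pause/resume playback accordingly."""
    while True:
        face = face_embedding(capture_image())
        present = face is not None and similarity(face, user_embedding) >= threshold
        if not present and player.playing:
            player.pause()    # user left: pause and annotate the UI
        elif present and not player.playing:
            player.resume()   # user returned: resume playback
        time.sleep(period_s)
```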
- a method for controlling a display apparatus includes displaying video content and a user interface; determining, from presence detection circuitry configured to detect an individual in proximity to the display apparatus, whether a user is in proximity of the display apparatus; when the user is determined to not be in proximity of the display apparatus, pausing the video content; and when the user is determined to subsequently be in proximity of the display apparatus, resuming the video content.
- the presence detection circuitry includes an imager for capturing images in front of the display apparatus, and the method further includes periodically causing the imager to capture an image; analyzing the captured image to identify face data; and comparing the face data with face data associated with the user to determine whether the user is in proximity of the display apparatus.
- a plurality of users that includes a primary user are in proximity of the display apparatus, and the method further includes when the primary user is subsequently determined to not be in proximity of the display apparatus, pausing the video content and updating the user interface to indicate that the video content is paused; and when the primary user is subsequently determined to be in proximity of the display apparatus, resuming the video content and updating the user interface to indicate that the video content is resumed.
- the presence detection circuitry includes near field communication circuitry for performing near field communications with a device that is in proximity of the display apparatus, and the method further includes determining whether the user is in proximity of the display apparatus by detecting near field communications from a portable device associated with the user.
- the method further includes when the video content is paused, updating the user interface on the display apparatus to indicate that the video content is paused; and when the video content is resumed, updating the user interface on the display apparatus to indicate that the video content is resumed.
- the method includes updating the user interface to depict information related to a content of the video.
- the information related to the content of the video includes advertising information related to the content.
- a non-transitory computer readable media that stores instruction code for controlling a display apparatus.
- the instruction code is executable by a computer for causing the computer to determine, via presence detection circuitry of the computer that is configured to detect an individual in proximity to the display apparatus, whether a user is in proximity of the display apparatus; when the user is determined to not be in proximity of the display apparatus, pause the video content; and when the user is determined to subsequently be in proximity of the display apparatus, resume the video content.
- the presence detection circuitry includes an imager for capturing images in front of the display apparatus, and the instruction code causes the computer to periodically cause the imager to capture an image; analyze the captured image to identify face data; and compare the face data with face data associated with the user to determine whether the user is in proximity of the display apparatus.
- a plurality of users that includes a primary user are in proximity of the display apparatus, when the primary user is subsequently determined to not be in proximity of the display apparatus, the instruction code causes the computer to pause the video content and update the user interface to indicate that the video content is paused; and when the primary user is subsequently determined to be in proximity of the display apparatus, the instruction code causes the computer to resume the video content and update the user interface to indicate that the video content is resumed.
- the presence detection circuitry includes near field communication circuitry for performing near field communications with a device that is in proximity of the display apparatus, and the instruction code causes the computer to determine whether the user is in proximity of the display apparatus by detecting near field communications from a portable device associated with the user.
- when the video content is paused, the instruction code causes the computer to update the user interface on the display apparatus to indicate that the video content is paused; and when the video content is resumed, the instruction code causes the computer to update the user interface on the display apparatus to indicate that the video content is resumed.
- the instruction code causes the computer to update the user interface to depict information related to a content of the video.
- a display apparatus includes presence detection circuitry for detecting an individual in proximity to the display apparatus; a display for displaying video content and a user interface; a processor in communication with user input circuitry, the display, and a search history database; and non-transitory computer readable media in communication with the processor that stores instruction code, which when executed by the processor, causes the processor to a) determine, from the presence detection circuitry, a user in proximity of the display apparatus; b) determine one or more program types associated with the user; c) determine available programs that match the determined one or more program types; and d) update the user interface to depict a listing of one or more of the available programs that match the determined one or more program types.
- the instruction code causes the processor to receive a power on command from the user to cause the display apparatus to enter a viewing state; and perform operations a)-d) described in the above aspect after receiving the power on command, but before receiving any subsequent commands from the user.
- the instruction code causes the processor to determine, from the presence detection circuitry, a plurality of users in proximity of the display apparatus; predict one or more program types associated with the plurality of users based on a history of program types previously viewed by the plurality of users stored in the search history database; determine, from the predicted one or more program types, program types common to each of the plurality of users; determine available programs that match the common program types; and update the user interface to depict a listing of one or more of the available programs that match the common program types.
- the presence detection circuitry includes an imager for capturing images in front of the display apparatus, and the instruction code causes the processor to periodically cause the imager to capture an image; analyze the captured image to identify face data; and compare the face data with face data associated with the user to determine whether the user is in proximity of the display apparatus.
- the presence detection circuitry includes near field communication circuitry for performing near field communications with a device that is in proximity of the display apparatus, and the instruction code causes the processor to determine whether the user is in proximity of the display apparatus by detecting near field communications from a portable device associated with the user.
- the display apparatus further includes the user input circuitry for receiving user commands, and the instruction code causes the processor to receive a command to select one of the available programs; and cause video content associated with the selected available program to be displayed on the display.
- the command corresponds to a voice command
- the instruction code causes the processor to implement a natural language processor; and determine, via the natural language processor, a meaning of the voice command.
- determination of one or more program types associated with the user is based on a history of program types previously viewed by the user stored in the search history database in communication with the display apparatus.
- the instruction code causes the processor to receive a power off command from a user to thereby cause the display apparatus to enter a lower power state and to deactivate the display; perform operations a)-d) described in the above aspect after receiving the power off command, but before receiving any subsequent commands from the user; and deactivate the display after a predetermined time when no user indication to power on the display apparatus is detected.
- the instruction code causes the processor to predict one or more information types associated with the user; and update the user interface to depict information associated with the predicted one or more information types.
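For the multi-user case, one simple interpretation is: rank program types per detected user from the history database, intersect the rankings to find common types, and filter the available programs. A sketch with invented sample data:

```python
from collections import Counter

def top_types(history: list[str], k: int = 3) -> set[str]:
    """Most frequently viewed program types for one user."""
    return {t for t, _ in Counter(history).most_common(k)}

def common_recommendations(histories: dict[str, list[str]], available: dict[str, str]):
    """Programs whose type every detected user has frequently watched."""
    common = set.intersection(*(top_types(h) for h in histories.values()))
    return [title for title, ptype in available.items() if ptype in common]

histories = {
    "alice": ["comedy", "comedy", "sports", "drama"],
    "bob": ["sports", "comedy", "news", "comedy"],
}
available = {"Friday Kickoff": "sports", "Late Laughs": "comedy", "World Brief": "news"}
print(common_recommendations(histories, available))  # sports and comedy titles match
```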
- a method for controlling a display apparatus includes a) providing presence detection circuitry for detecting an individual in proximity to the display apparatus; b) displaying video content and a user interface; c) determining, from the presence detection circuitry, a user in proximity of the display apparatus; d) determining one or more program types associated with the user; e) determining available programs that match the determined one or more program types; and f) updating the user interface to depict a listing of one or more of the available programs that match the determined one or more program types.
- the method further includes receiving a power on command from the user to cause the display apparatus to enter a viewing state; and performing operations c)-f) after receiving the power on command, but before receiving any subsequent commands from the user.
- the method further includes determining, from the presence detection circuitry, a plurality of users in proximity of the display apparatus; predicting one or more program types associated with the plurality of users based on a history of program types previously viewed by the plurality of users stored in a search history database; determining, from the predicted one or more program types, program types common to each of the plurality of users; determining available programs that match the common program types; and updating the user interface to depict a listing of one or more of the available programs that match the common program types.
- the presence detection circuitry includes an imager for capturing images in front of the display apparatus, and the method further includes periodically causing the imager to capture an image; analyzing the captured image to identify face data; and comparing the face data with face data associated with the user to determine whether the user is in proximity of the display apparatus.
- the presence detection circuitry includes near field communication circuitry for performing near field communications with a device that is in proximity of the display apparatus, and the method further includes determining whether the user is in proximity of the display apparatus by detecting near field communications from a portable device associated with the user.
- the method further includes receiving, via user input circuitry, user commands, receiving a command to select one of the available programs; and causing video content associated with the selected available program to be displayed on the display apparatus.
- the command corresponds to a voice command
- the method further includes implementing a natural language processor; and determining, via the natural language processor, a meaning of the voice command.
- determination of the one or more program types associated with the user is based on a history of program types previously viewed by the user stored in the search history database in communication with the display apparatus.
- the method further includes receiving a power off command from the user to thereby cause the display apparatus to enter a lower power state and to deactivate a display; performing operations c)-f) after receiving the power off command, but before receiving any subsequent commands from the user; and deactivating the display after a predetermined time when no user indication to power on the display apparatus is detected.
- the method further includes predicting one or more information types associated with the user; and updating the user interface to depict information associated with the predicted one or more information types.
- a display apparatus includes a display for displaying video content and a user interface; a processor in communication with presence detection circuitry and the display; and non-transitory computer readable media in communication with the processor that stores instruction code.
- the instruction code is executed by the processor and causes the processor to receive data that relates a smart appliance state to display apparatus usage.
- the processor also determines current display apparatus usage; and determines a proposed smart appliance state corresponding to the current display apparatus usage based on the received data.
- the processor adjusts the smart appliance to the determined state.
- the smart appliance state defines an activation state of the smart appliance
- the display apparatus usage defines one or more of: a time of usage of the display apparatus, a program type viewed on the display apparatus, and a specific user of the display apparatus.
- the display apparatus includes the presence detection circuitry for detecting a specific user in proximity to the display apparatus, and the presence detection circuitry includes an imager for capturing images in front of the display apparatus, where the instruction code causes the processor to periodically cause the imager to capture an image; analyze the captured image to identify face data; and compare the face data with face data associated with a plurality of users to determine whether the specific user is in proximity of the display apparatus.
- the display apparatus includes communication circuitry for receiving new state information from smart appliances, and a database for storing the new state information of the smart appliances and information that defines new display apparatus usage of the display apparatus, where the instruction code causes the processor to continuously update the database with the new state information of the smart appliances and the new display apparatus usage information of the display apparatus; and correlate the new state information of the smart appliances and the new display apparatus usage information associated with the display apparatus to form the relation between the smart appliances state and the display apparatus usage.
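A minimal sketch of the correlation step, treating each observed pairing of a usage context (time, program type, user) with an appliance state as a vote, and proposing the most frequent state for the current context; all field names are illustrative:

```python
from collections import Counter, defaultdict

class ApplianceRoutine:
    """Learns which appliance state usually accompanies a given usage context."""

    def __init__(self):
        self._votes = defaultdict(Counter)

    def observe(self, usage_context: tuple, appliance_state: str) -> None:
        self._votes[usage_context][appliance_state] += 1

    def propose_state(self, usage_context: tuple) -> str | None:
        votes = self._votes.get(usage_context)
        return votes.most_common(1)[0][0] if votes else None

routine = ApplianceRoutine()
# context: (hour of day, program type, user) -> observed lamp state
routine.observe((21, "movie", "alice"), "lights_dimmed")
routine.observe((21, "movie", "alice"), "lights_dimmed")
routine.observe((21, "movie", "alice"), "lights_on")
print(routine.propose_state((21, "movie", "alice")))  # "lights_dimmed"
```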
- a method for controlling a display apparatus includes displaying video content and a user interface and receiving data that relates a smart appliance state to display apparatus usage. The method also includes determining current display apparatus usage and determining a proposed smart appliance state corresponding to the current display apparatus usage based on the received data. The method also includes adjusting the smart appliance to the determined state.
- the smart appliance state defines an activation state of the smart appliance
- the display apparatus usage defines one or more of: a time of usage of the display apparatus, a program type viewed on the display apparatus, and a specific user of the display apparatus.
- the display apparatus includes presence detection circuitry for detecting a specific user in proximity to the display apparatus, and the presence detection circuitry includes an imager for capturing images in front of the display apparatus, where the method further includes periodically causing the imager to capture an image; analyzing the captured image to identify face data; and comparing the face data with face data associated with a plurality of users to determine whether the specific user is in proximity of the display apparatus.
- the display apparatus includes communication circuitry for receiving new state information from smart appliances, and a database for storing the new state information of the smart appliances and information that defines new display apparatus usage of the display apparatus, where the method further includes continuously updating the database with the new state information of the smart appliances and the new display apparatus usage information of the display apparatus; and correlating the new state information of the smart appliances and the new display apparatus usage information associated with the display apparatus to form the relation between the smart appliances state and the display apparatus usage.
- a non-transitory computer readable media that stores instruction code for controlling a display apparatus.
- the instruction code is executable by a computer for causing the computer to receive data that relates a smart appliance state to display apparatus usage; determine current display apparatus usage; determine a proposed smart appliance state corresponding to the current display apparatus usage based on the received data; and adjust the smart appliance to the determined state.
- the smart appliance state defines an activation state of the smart appliance
- the display apparatus usage defines one or more of: a time of usage of the display apparatus, a program type viewed on the display apparatus, and a specific user of the display apparatus.
- the display apparatus includes presence detection circuitry for detecting a specific user in proximity to the display apparatus, and the presence detection circuitry includes an imager for capturing images in front of the display apparatus, where the instruction code causes the computer to periodically cause the imager to capture an image; analyze the captured image to identify face data; and compare the face data with face data associated with a plurality of users to determine whether the specific user is in proximity of the display apparatus.
- the display apparatus includes communication circuitry for receiving new state information from smart appliances, and a database for storing the new state information of the smart appliances and information that defines new display apparatus usage of the display apparatus, where the instruction code causes the computer to continuously update the database with the new state information of the smart appliances and the new display apparatus usage information of the display apparatus; and correlate the new state information of the smart appliances and the new display apparatus usage information associated with the display apparatus to form the relation between the smart appliances state and the display apparatus usage.
- Fig. 1 illustrates an exemplary environment in which a display apparatus operates.
- Fig. 2 illustrates exemplary operations for enhancing navigation of video content.
- Figs. 3A-3C illustrate exemplary user interfaces that may be presented to a user during the operations of Fig. 2.
- Fig. 4 illustrates exemplary operations that facilitate locating a particular type of video content.
- Fig. 5 illustrates an exemplary user interface that may be presented to a user during the operations of Fig. 4.
- Fig. 6 illustrates exemplary operations for determining information related to images in video content.
- Figs. 7A and 7B illustrate exemplary user interfaces that may be presented to a user during the operations of Fig. 6.
- Fig. 8 illustrates alternative exemplary operations for determining information related to images in video content.
- Figs. 9A and 9B illustrate exemplary user interfaces that may be presented to a user during the operations of Fig. 8.
- Fig. 10 illustrates exemplary operations for automatically pausing video content.
- Figs. 11A and 11B illustrate exemplary user interfaces that may be presented to a user during the operations of Fig. 10.
- Fig. 12 illustrates alternative exemplary operations for automatically pausing video content.
- Figs. 13A-13D illustrate exemplary user interfaces that may be presented to a user during the operations of Fig. 12.
- Fig. 14 illustrates exemplary operations for adjusting various smart appliances based on a detected routine of a user.
- Figs. 15A-15B illustrate exemplary user interfaces that may be presented to a user during the operations of Fig. 14.
- Fig. 16 illustrates an exemplary computer system that may form part of or implement the systems described in the figures or in the following paragraphs.
- the embodiments described below are directed to various user interface implementations that facilitate access to television features in an intelligent, easy to use manner.
- the user interfaces rely on various machine learning techniques that facilitate access to these features and other information with a minimum number of steps.
- the user interfaces are configured to be intuitive, with minimal learning time required to become proficient in navigating the user interfaces.
- Fig. 1 illustrates an exemplary environment in which a display apparatus operates. Illustrated are the display apparatus 100, a group of mobile devices 105, a GPS network 110, a computer network 115, a group of social media servers 120, a group of content servers 125, a support server 127, and one or more users 130 that may view and/or interact with the display apparatus 100.
- the display apparatus 100, social media servers 120, content servers 125, and support server 127 may communicate with one another via a network 107 such as the Internet, a cable network, a satellite network, etc.
- the social media servers 120 correspond generally to computer systems hosting publicly available information that may be related to the users 130 of the display apparatus 100.
- the social media servers 120 may correspond to well-known social networking platforms.
- the social media servers 120 may include blogs, forums, and/or any other systems or websites from which information related to the users 130 may be obtained.
- the mobile devices 105 may correspond to mobile phones, tablets, etc. carried by one or more of the users 130.
- the mobile devices 105 may include short range communication circuitry that facilitates direct communication with the display apparatus 100.
- the mobile devices 105 may include Bluetooth circuitry, near-field communication circuitry, etc.
- the communication circuitry facilitates detection of a given mobile device 105 when it is in the proximity of the display apparatus 100. This in turn may facilitate determination, by the display apparatus 100, of the presence of a user 130 within viewing distance of the display apparatus 100.
- the GPS network 110 and computer network 115 may communicate information to the display apparatus 100 that may in turn facilitate determination, by the display apparatus 100, of the general location of display apparatus 100.
- the GPS network 110 may communicate information that facilitates determining a relatively precise location of the display apparatus 100.
- the computer network 115 may assign an IP address to the display apparatus 100 that may be associated with a general location, such as a city or other geographic region.
- the content servers 125 correspond generally to computer systems hosting video content.
- the content servers 125 may correspond to head-end equipment operated by a cable television provider, network provider, etc.
- the content servers 125 may in some cases store video content such as movies, television shows, sports programs, etc.
- video content may include metadata that defines various aspects of the video content.
- metadata associated with a sports matchup may include information such as timestamps, still images, etc. related to various events of the match, such as goals, penalties, etc.
- the metadata may include information associated with different individuals depicted in the video content such as the names of players, coaches, etc.
- the metadata in the video content may include information that facilitates determining whether the video content is of a particular type (e.g., comedy, drama, sports, adventure, etc.).
- the metadata may include information associated with different individuals depicted in the video content such as the names of actors shown in the video content.
- the metadata may include information associated with different objects depicted in the video content such as garments worn by individuals, personal items carried by the individuals, and various objects that may be depicted in the video content.
- the metadata may have been automatically generated beforehand by various machine learning techniques for identifying individuals, scenes, events, etc. in the video content.
- the machine learning techniques may use some form of human assistance in making this determination.
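Pulling the preceding points together, the kind of metadata described might be represented as follows. This is a hypothetical structure for illustration; the patent does not define a schema:

```python
# Hypothetical metadata for a soccer match, expressed as a Python dict.
video_metadata = {
    "content_type": ["sports"],
    "individuals": ["player A", "coach B"],  # names of people depicted
    "scenes": [
        {
            "scene_type": "goal",
            "timestamp_s": 754.0,
            "still_image": "frame_754.jpg",
            "objects": [{"label": "home jersey", "category": "garment"}],
        },
        {
            "scene_type": "penalty",
            "timestamp_s": 1310.5,
            "still_image": "frame_1310.jpg",
            "objects": [],
        },
    ],
}
```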
- the support server 127 corresponds generally to a computer system configured to provide advanced services to the display apparatus 100.
- the support server 127 may correspond to a high-end computer configured to perform various machine learning techniques for determining the meaning of voice commands, predicting responses to the voice commands, etc.
- the support server 127 may receive voice commands and other types of commands from the display apparatus 100 and communicate responses associated with the commands back to the display apparatus.
- the display apparatus 100 may correspond to a television or other viewing device with enhanced user interface capabilities.
- the display apparatus 100 may include a CPU 150, a video processor 160, an I/O interface 155, an AI processor 165, a display 175, a support database 153, and instruction memory 170.
- the CPU 150 may correspond to a processor.
- the CPU 150 may execute an operating system, such as Linux or other operating system suitable for execution within a display apparatus.
- Instruction code associated with the operating system and for controlling various aspects of the display apparatus 100 may be stored within the instruction memory 170.
- instruction code stored in the instruction memory 170 may facilitate controlling the CPU 150 to communicate information to and from the I/O interface 155.
- the CPU 150 may process video content received from the I/O interface 155 and communicate the processed video content to the display 175.
- the CPU 150 may generate various user interfaces that facilitate controlling different aspects of the display apparatus.
- the I/O interface 155 is configured to interface with various types of hardware and to communicate information received from the hardware to the CPU.
- the I/O interface 155 may be coupled to one or more antennas that facilitate receiving information from the mobile devices 105, GPS network 110, computer network 115, smart appliances 117, etc.
- the I/O interface may be coupled to an imager 151 arranged on the face of the display apparatus 100 to facilitate capturing images of individuals near the display apparatus.
- the I/O interface may be coupled to one or more microphones 152 arranged on the display apparatus 100 to facilitate capturing voice instructions that may be conveyed by the users 130.
- the AI processor 165 may correspond to a processor specifically configured to perform AI operations such as natural language processing, still and motion image processing, voice processing, etc.
- the AI processor 165 may be configured to perform voice recognition to recognize voice commands received through the microphone.
- the AI processor 165 may include face recognition functionality to identify individuals in images captured by the imager.
- the AI processor 165 may be configured to analyze content communicated from one or more content servers to identify objects within the content.
- Exemplary operations performed by the CPU 150 and/or other modules of the display apparatus 100 in providing an intelligent user interface are illustrated below.
- the operations may be implemented via instruction code stored in non-transitory computer readable media 170 that resides within the subsystems configured to cause the respective subsystems to perform the operations illustrated in the figures and discussed herein.
- Fig. 2 illustrates exemplary operations for enhancing navigation of video content. The operations of Fig. 2 are better understood with reference to Figs. 3A-3C.
- the display apparatus 100 may be depicting video content, such as a soccer match, as illustrated in Fig. 3A.
- the user 130 may then issue a first scene command 305 to the display apparatus 100 to have the display apparatus 100 search for scenes in the video content.
- the user 130 may simply speak out loud, “show me all the goals.”
- the natural language processor implemented by the CPU 150 alone or in cooperation with the AI processor 165 may determine the meaning of the voice command.
- data associated with the voice command may be communicated to the support server 127 which may then ascertain the meaning of the voice command and convey the determined meaning back to the display apparatus.
- the user interface 300 may include a phrase control 310 that is updated in real-time to depict text associated with the commands issued by the user.
- the display apparatus 100 may determine scenes in the video content that are related to a type of scene associated with the first scene command 305.
- the CPU 150 alone or in cooperation with the AI processor 165 may implement various machine learning techniques that utilize metadata associated with the video content to determine scenes in the video content that are related to the scene type.
- the first scene command 305 may be communicated to the support server 127 and the support server 127 may determine and convey the scene type to the display apparatus 100.
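A hedged sketch of the scene-type matching described above, assuming scenes carry tag-style metadata; the Scene structure and its fields are assumptions for illustration only.

```python
# Illustrative sketch of selecting scenes whose metadata matches the
# requested scene type; Scene and its fields are assumed structures.
from dataclasses import dataclass

@dataclass
class Scene:
    start_s: float          # scene start time in seconds
    end_s: float            # scene end time in seconds
    tags: set[str]          # metadata labels, e.g. {"goal", "first_half"}
    thumbnail: str          # path or URL of a representative image

def scenes_of_type(scenes: list[Scene], scene_type: str) -> list[Scene]:
    """Return scenes whose metadata tags include the requested type."""
    return [scene for scene in scenes if scene_type in scene.tags]

# Usage: find every goal in the match.
match_scenes = [
    Scene(754.0, 766.0, {"goal", "first_half"}, "goal1.jpg"),
    Scene(3120.0, 3131.0, {"goal", "second_half"}, "goal2.jpg"),
    Scene(40.0, 55.0, {"kickoff", "first_half"}, "kickoff.jpg"),
]
goals = scenes_of_type(match_scenes, "goal")  # two scenes
```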
- the user interface 300 of the display apparatus 100 may be updated to depict scene images 320 associated with the determined scenes.
- images 320 from the video content metadata associated with the scenes may be displayed on the user interface 300.
- the images 320 may correspond to still images and/or a sequence of images or video associated with the scene.
- the user interface 300 may be updated to display unique identifiers 325 on or near each image.
- the unique identifiers 325 are superimposed on part of each image so that the unique identifiers 325 are clearly visible.
- the user 130 may issue a second scene command that specifies one of the unique identifiers 325.
- the user 130 may specify “one” to select the scene associated with the first image 320.
- the unique identifiers 325 correspond to their associated scenes.
- the unique identifiers 325 may take the form of identifiers, such as the Arabic numerals shown in Fig. 3A, that are easy for the user to say and for the display apparatus 100 itself or the servers to recognize.
- the user may say an Arabic numeral (for example, say 1) as the second scene command.
- the user may issue the second scene command by pressing a corresponding button (for example, 1) on a controlling device (for example, a remote control of the display apparatus).
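The two input paths for the second scene command might be unified as in the following sketch, which maps either a spoken numeral or a remote-control digit to the same unique identifier; the SPOKEN_DIGITS table is an illustrative assumption.

```python
# Hedged sketch of resolving the second scene command -- a spoken numeral or
# a remote-control digit -- to the unique identifier it names.
SPOKEN_DIGITS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def resolve_identifier(command: str) -> int:
    """Accept either '1' (a remote button) or 'one' (a voice command)."""
    return int(command) if command.isdigit() else SPOKEN_DIGITS[command.lower()]

# Usage: both forms of the second scene command select the same scene.
assert resolve_identifier("1") == resolve_identifier("one") == 1
```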
- video content associated with the specified unique identifier 325 may be displayed on the user interface 300, as illustrated in Fig. 3C.
- the user 130 may refine a scene command by specifying additional information. For example, in response to receiving the first scene command 305 at block 200, at block 225 one or more potential scene commands 315 related to the first scene command 305 may be determined.
- the machine learning techniques implemented by the CPU 150, AI processor 165, and/or the support server 127 may be utilized to determine the potential scene commands related to the first scene command 305.
- the metadata in the video content may define a hierarchy of scene commands utilized by the machine learning techniques in determining potential scene commands related to a given first scene command 305.
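One plausible shape for such a metadata-defined hierarchy is a simple mapping from an issued command to its refinements, as in the sketch below; the nested layout and the command strings are assumptions, not the patent's actual metadata format.

```python
# Sketch of a metadata-defined hierarchy of scene commands; the dict layout
# is an assumption about how such metadata might be organized.
COMMAND_HIERARCHY = {
    "show me all the goals": ["in the first half", "in the second half", "by Real Madrid"],
    "show me all the fouls": ["yellow cards", "red cards"],
}

def potential_scene_commands(first_command: str) -> list[str]:
    """Look up refinements one level below the issued command."""
    return COMMAND_HIERARCHY.get(first_command, [])
```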
- the user interface 300 may be updated to depict one or more of the potential scene commands 315, as illustrated in Fig. 3A. For example, in response to the first scene command 305 “show me all the goals,” the potential scene commands “in the first half,” “by Real Madrid,” etc. may be determined and depicted.
- the user 130 may issue one of the potential scene commands 315 to instruct the display apparatus 100 to search for scenes in the video content, as illustrated in Fig. 3B.
- the user 130 may simply speak out loud, “in the first half.”
- the phrase control 310 may be updated in real-time to depict text associated with the first scene command 305 and a third scene command 330.
- the operations may repeat from block 205.
- the display apparatus 100 may determine scenes in the video content that are related to a type of scene associated with the first scene command 305 and the third scene command 330.
- the first scene command 305 and the third scene command 330 may be conveyed to the support server 127, and the support server 127 may convey information that defines the related scenes to the display apparatus.
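Combining the first and third scene commands can be viewed as conjunctive filtering over scene metadata, sketched below under the assumption that each scene is summarized by a set of tags.

```python
# Sketch of narrowing scenes with each additional command: every scene must
# carry all tags accumulated so far. Tag names are illustrative.
def refine_scenes(scenes: list[set[str]], required_tags: set[str]) -> list[set[str]]:
    """Keep only scenes whose metadata tags contain every required tag."""
    return [tags for tags in scenes if required_tags <= tags]

scenes = [{"goal", "first_half"}, {"goal", "second_half"}, {"corner", "first_half"}]
# "show me all the goals" then "in the first half":
assert refine_scenes(scenes, {"goal", "first_half"}) == [{"goal", "first_half"}]
```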
- additional scene commands beyond the first and third scene commands may be specified to facilitate narrowing down the desired content.
- another group of potential scene commands 315 may be depicted, and so on.
- Fig. 4 illustrates exemplary operations that facilitate locating a particular type of video content. The operations of Fig. 4 are better understood with reference to Fig. 5.
- the display apparatus 100 may be depicting video content, such as a sitcom, as illustrated in Fig. 5.
- the user 130 may issue a first search command 505 to the display apparatus 100 to have the display apparatus 100 search for a particular type of video content.
- the user 130 may simply speak out loud, “show.”
- the natural language processor implemented by the CPU 150 alone or in cooperation with the AI processor 165 may determine the meaning of the voice command.
- data associated with the voice command may be communicated to the support server 127 which may then ascertain the meaning of the voice command and convey the determined meaning back to the display apparatus.
- the display apparatus 100 may determine video content that is related to the first search command 505.
- the CPU 150 alone or in cooperation with the AI processor 165 may implement various machine learning techniques that utilize metadata associated with the video content to determine video content that is related to the search command.
- the first search command 505 may be communicated to the support server 127 and the support server 127 may determine and convey information related to the video content that is in turn related to the first search command to the display apparatus 100.
- the user interface 500 may be updated to depict controls 520 that facilitate selecting video content.
- Each control may include a unique identifier 525 on or near the control 520 that facilitates selecting the control by voice.
- a first control with the unique identifier “one” may correspond to an image that represents an input source of the display apparatus 100 that facilitates selecting video content from the input source.
- a second control with the unique identifier “two” may correspond to an image of an actor that, when selected, facilitates selecting video content that includes the actor.
- a fourth control with the unique identifier “four” may correspond to a scene from a movie that the user frequently watches or that is associated with types of shows the user 130 watches.
- the machine learning techniques may determine the type of control to display based at least in part on a history of search commands and selections specified by the user that may be stored in the support database 153 of the display apparatus 100 or maintained within the support server 127.
- the support database 153 is dynamically updated to reflect the user’s choices to improve the relevancy of the controls displayed to the user for subsequent requests.
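As an illustration of biasing the displayed controls toward the stored history, the following sketch ranks candidate controls by how often the user previously selected them; a plain frequency count stands in for the machine learning techniques, which the disclosure leaves unspecified.

```python
# Rough sketch of history-based ranking of the controls; a frequency count
# stands in for the machine-learning model maintained in the support
# database 153 or the support server 127.
from collections import Counter

def rank_controls(candidates: list[str], history: list[str], top_n: int = 4) -> list[str]:
    """Order candidate controls by how often the user picked them before."""
    frequency = Counter(history)
    return sorted(candidates, key=lambda c: frequency[c], reverse=True)[:top_n]

history = ["action movies", "action movies", "games", "news"]
controls = rank_controls(["news", "games", "action movies", "dramas"], history)
# -> ["action movies", "news", "games", "dramas"]
```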
- the user 130 may issue a second search command that specifies one of the unique identifiers. For example, the user 130 may say “four” to select the scene associated with the fourth image 520.
- video content associated with the specified unique identifier (e.g., “four”) may be depicted on the user interface 500 of the display apparatus 100.
- the user 130 may refine a search command by specifying additional information. For example, in response to receiving the first search command at block 400, at block 425, one or more potential third search commands 515 related to the first search command 505 may be determined.
- the machine learning techniques implemented by the CPU 150, AI processor 165, and/or the support server 127 may be utilized to determine the potential commands related to the first search command 505.
- the metadata in the video content may include information that facilitates determining whether the video content is associated with a particular type of video content (e.g., comedy, drama, sports, etc.). This metadata may be utilized by the machine learning techniques in determining potential third search commands related to a given first search command.
- the user interface 500 may be updated to depict one or more of the potential search commands 515, as illustrated in Fig. 5. For example, in response to the first search command “show,” the potential search commands 515 “games,” “action movies,” etc. may be determined and displayed.
- the user interface 500 may include a phrase control 510 that is updated in real-time to depict text associated with the commands issued by the user.
- the user 130 may issue one of the potential search commands 515 to instruct the display apparatus 100 to search for various types of video content. For example, the user 130 may simply speak out loud, “action movies.”
- the phrase control 510 may be updated in real-time to depict text associated with the first search command 505 and the third search command 515 (e.g., “show action movies”).
- the operations may repeat from block 405.
- the display apparatus 100 may determine video content that is related to the first and third search commands and display appropriate controls for selection by the user.
- Fig. 6 illustrates exemplary operations for determining information related to images in video content. The operations of Fig. 6 are better understood with reference to Figs. 7A and 7B.
- the display apparatus 100 may be depicting video content, such as a movie, as illustrated in Fig. 7A.
- the user 130 may issue a first query 705 to the display apparatus 100 to have the display apparatus 100 provide information related to the query. For example, the user 130 may simply speak out loud, “who is on screen.”
- the natural language processor implemented by the CPU 150 and/or AI processor 165 may determine the meaning of the voice command.
- data associated with the voice command may be communicated to the support server 127 which may then ascertain the meaning of the voice command and convey the determined meaning back to the display apparatus 100.
- the display apparatus 100 may determine one or more objects of the image associated with the query 705.
- the CPU 150 alone or in cooperation with the AI processor 165 may implement various machine learning techniques that utilize metadata associated with the video content to determine different objects being depicted on the user interface 700 of the display apparatus 100.
- the first query 705 may be communicated to the support server 127 and the support server 127 may determine and convey information related to different objects depicted on the user interface 700 to the display apparatus 100.
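Assuming the metadata carries time-stamped object annotations, the on-screen objects for a query such as “who is on screen” might be looked up as in this sketch; the (start, end, label) layout is an illustrative assumption.

```python
# Illustrative lookup of objects on screen at the playback position, driven
# by timed metadata; the structures are assumptions for the sketch.
def objects_on_screen(timed_objects: list[tuple[float, float, str]],
                      position_s: float) -> list[str]:
    """Return object labels whose [start, end) interval covers the position."""
    return [label for start, end, label in timed_objects if start <= position_s < end]

metadata = [(0.0, 95.0, "actor: John Doe"), (42.0, 61.0, "actor: Jane Roe")]
assert objects_on_screen(metadata, 50.0) == ["actor: John Doe", "actor: Jane Roe"]
```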
- the user interface 700 of the display apparatus 100 may be updated to depict controls 720 that facilitate selecting different objects.
- Each control may include a unique identifier 725 on or near each control 720 that facilitates selecting the control by voice.
- controls for each actor may be depicted on the user interface 700.
- the user 130 may select one of the unique identifiers 725. For example, the user 130 may specify “two” to select a particular actor.
- the user interface 700 may be updated to depict information related to the selection. For example, as illustrated in Fig. 7B, an informational control 730 with information related to the selected actor may be provided.
- the user 130 may refine the query by specifying additional information. For example, in response to receiving the first query at block 600, at block 625, one or more potential second queries 715 related to the first query 705 may be determined.
- the machine learning techniques implemented by the CPU 150 and/or the support server 127 may be utilized to determine the potential queries related to the first query 705.
- Metadata in the video content may be utilized by the machine learning techniques in determining potential queries related to a given first query.
- the user interface 700 may be updated to depict one or more of the potential queries 715, as illustrated in Fig. 7A. For example, in response to the first query “who is on screen,” the potential queries “other movies by john doe,” “where was it filmed,” etc. may be determined and depicted.
- the user interface 700 may include a phrase control 710 that is updated in real-time to depict text associated with the queries issued by the user.
- the user 130 may indicate a second query that corresponds to one of the potential queries 715 to instruct the display apparatus 100 to depict information related to the query.
- the phrase control 710 may be updated in real-time to depict text associated with the first query 705 and the second query.
- objects related to the second query may be determined and included with or may replace the objects previously determined. Then the operations may repeat from block 605.
- Fig. 8 illustrates alternative exemplary operations for determining information related to images in video content. The operations of Fig. 8 are better understood with reference to Figs. 9A and 9B.
- the display apparatus 100 may be depicting video content, such as a sitcom, as illustrated in Fig. 9A.
- the user 130 may issue a command to the display apparatus 100 to pause the video content so that a still image is depicted on the user interface 900.
- the display apparatus 100 may determine one or more objects of the image.
- the CPU 150 alone or in cooperation with the AI processor 165 may implement various machine learning techniques that utilize metadata associated with the video content to determine different objects being depicted in the still image.
- the still image may be communicated to the support server 127 and the support server 127 may determine and convey different objects being depicted in the still image to the display apparatus 100.
- the user interface of the display apparatus 100 may be updated to depict controls 920 that facilitate selecting different objects, as illustrated in Fig. 9A.
- controls 920 may be provided for selecting an advertisement related to one of the objects in the still image, sharing the video content, rating the video content, or displaying information related to one of the objects.
- Controls 920 for other aspects may be provided.
- Each control 920 may include a unique identifier on or near the control 920 that facilitates selecting the control by voice.
- the user 130 may select one of the unique identifiers. For example, the user 130 may specify the unique identifier associated with a control depicting a handbag that corresponds to a handbag shown in the still image.
- the user interface 900 may be updated to depict information related to the selection.
- an informational control 925 with information related to the selection may be provided.
- the informational control 925 may depict a QR code associated with a URL that may be utilized to find out more information related to the selection.
- the QR code facilitates navigation to the URL by scanning the QR code with an appropriate application on, for example, a mobile device.
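A QR code of this kind could be rendered with, for example, the third-party Python package qrcode (installed via pip install qrcode[pil]); the package choice and the URL below are assumptions, not part of the disclosure.

```python
# One way to render the informational control's QR code; this uses the
# third-party "qrcode" package rather than any library named in the
# disclosure.
import qrcode

def make_info_qr(url: str, path: str = "info_qr.png") -> str:
    """Encode the product URL as a QR image the UI can composite on screen."""
    image = qrcode.make(url)  # returns a PIL image
    image.save(path)
    return path

make_info_qr("https://shop.example.com/handbag-1234")  # hypothetical URL
```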
- Fig. 10 illustrates alternative exemplary operations for automatically pausing video content. The operations of Fig. 10 are better understood with reference to Figs. 11A and 11B.
- the display apparatus 100 may determine whether a user is in proximity of the display apparatus 100.
- the imager 151 of the display apparatus 100 may capture images in front of the display apparatus.
- the CPU 150 alone or in cooperation with the AI processor 165 may control the imager 151 to capture an image, analyze the captured image to identify face data in the image, and compare the face data with face data associated with the user 130 to determine whether the user 130 is in proximity of the display apparatus.
- face data associated with the user 130 may have been previously captured by the display apparatus 100 during, for example, an initial setup routine.
- the face data may have been stored to the support database 153.
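A minimal sketch of the face-data comparison follows, assuming faces are reduced to feature vectors by some detector (out of scope here) and that presence is decided by a distance threshold; the threshold value is arbitrary.

```python
# Hedged sketch of the presence check: compare a face embedding from the
# captured frame against the enrolled embedding stored during setup.
import math

def embedding_distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two face-feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def user_in_proximity(frame_embedding: list[float],
                      enrolled_embedding: list[float],
                      threshold: float = 0.6) -> bool:
    """Treat the user as present when the captured face is close enough."""
    return embedding_distance(frame_embedding, enrolled_embedding) < threshold

# Usage with toy 3-dimensional embeddings:
assert user_in_proximity([0.1, 0.2, 0.3], [0.12, 0.21, 0.29])
```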
- near field communication circuitry of the display apparatus 100 may be utilized to detect the presence of a device with near field communication capabilities, carried by a user 130, in proximity to the display apparatus.
- the device may have been previously registered with the display apparatus 100 as belonging to a particular user. Registration information may be stored to the support database 153.
- a status control 1105 may be depicted on the user interface 1100 to indicate that the video content has been paused.
- the user interface 1100 may depict additional details related to a still image depicted on the user interface 1100 such as the information described above in relation to Figs. 9A and 9B.
- the video content may be resumed, as illustrated in Fig. 11B.
- the status control 1105 may be updated to indicate that the video content will be resuming.
- the display apparatus 100 may perform the operations above even when other users 130 are in proximity of the display apparatus 100. For example, in an initial state, a number of users 130 that includes a primary user 130 may be in proximity of the display apparatus. When the primary user is subsequently determined to not be in proximity of the display apparatus, the video content may be paused, as described above. When the primary user is subsequently determined to be in proximity of the display apparatus, the video content may be resumed.
- Fig. 12 illustrates alternative exemplary operations for automatically pausing video content. The operations of Fig. 12 are better understood with reference to Fig. 13A-13D.
- the display apparatus 100 may determine whether a user is in proximity of the display apparatus.
- the imager 151 of the display apparatus 100 may capture images in front of the display apparatus.
- the CPU 150 alone or in cooperation with the AI processor 165 may control the imager 151 to capture an image, analyze the captured image to identify face data in the image, and compare the face data with face data associated with the user to determine whether the user is in proximity of the display apparatus 100.
- face data associated with the user 130 may have been previously captured by the display apparatus 100 during, for example, an initial setup routine.
- the presence of the user 130 may be determined based on near field communication circuitry of a device carried by the user 130, as described above.
- when a user is determined to be in proximity of the display apparatus 100, one or more program types associated with the user 130 are determined.
- the CPU 150 alone or in cooperation with the AI processor 165 may implement various machine learning techniques to determine program types associated with the user 130.
- information that identifies the user 130 may be communicated to the support server 127 and the support server 127 may determine program types associated with the user.
- the machine learning techniques may determine the program types associated with the user 130 by, for example, analyzing a history of programs viewed by the user 130, by receiving information from social media servers 120 related to likes and dislikes of the user, and/or by another manner.
- programs that are available for watching at the time of user detection or within a predetermined time later (e.g., 30 minutes) may be determined.
- metadata associated with available video content may be analyzed to determine whether any of the video content is related to the user associated program types determined above.
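The availability check might combine the user-associated program types with the look-ahead window as in the sketch below; the listing fields and the 30-minute default are assumptions for illustration.

```python
# Sketch of matching available programs against the user's program types,
# limited to shows starting within the look-ahead window; all data shapes
# here are assumptions.
from datetime import datetime, timedelta

def matching_programs(listings: list[dict], user_types: set[str],
                      now: datetime, window: timedelta = timedelta(minutes=30)) -> list[dict]:
    """Keep programs of a matching type that start within the window."""
    return [
        program for program in listings
        if program["type"] in user_types
        and now <= program["start"] <= now + window
    ]

now = datetime(2019, 5, 21, 20, 0)
listings = [
    {"title": "Cup Final", "type": "sports", "start": now + timedelta(minutes=15)},
    {"title": "Late News", "type": "news", "start": now + timedelta(hours=3)},
]
assert [p["title"] for p in matching_programs(listings, {"sports"}, now)] == ["Cup Final"]
```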
- the user interface 1300 may be updated to present information 1305 related to available programs that match the user associated program types.
- the user interface 1300 may include controls that facilitate watching one of the available programs, recording the available programs, etc.
- a group of users 130 may be detected within proximity of the display apparatus 100 and the program types determined at block 1205 may be based on the intersection of program types associated with two or more of the users 130.
- the user interface 1300 may be updated to depict information 1305 related to available programs that match the intersection of user associated program types.
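For the group case, the intersection of program types could be computed as simply as the following sketch suggests.

```python
# Tiny sketch of the group case: recommend only from the intersection of the
# detected users' program types.
from functools import reduce

def common_types(per_user_types: list[set[str]]) -> set[str]:
    """Program types shared by every detected user."""
    return reduce(set.intersection, per_user_types) if per_user_types else set()

assert common_types([{"sports", "news"}, {"sports", "drama"}]) == {"sports"}
```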
- the operations above may be performed spontaneously when a user 130 is detected.
- a first user 130 may be viewing video content on the display apparatus 100 when a second user comes within proximity of the display apparatus.
- the operations performed above may occur after detection of the second user.
- the operations above may be performed immediately after powering on the display apparatus 100.
- the operations may be performed after a power off indication has been received.
- the display apparatus 100 may either power up after having been off or may cancel a power off operation, and the user interface 1300 may be updated to depict a minimal amount of information so as not to cause too much of a distraction.
- the user interface 1300 may merely depict an informational control 1305 to make the user 130 aware of, for example, an upcoming program.
- a control 1310 may be provided to allow the user 130 to bring the display apparatus 100 into fully powered up condition to facilitate watching the program.
- one or more information types associated with the user 130 may be determined and the user interface 1300 may be updated to depict information associated with the determined information types.
- the user 130 may have been determined to be interested in knowing the weather.
- the display apparatus 100 may be powered up in a minimal power state and an informational control 1305 that displays information related to the weather may be depicted.
- the informational control 1305 may be updated to display information related to an upcoming television episode, as illustrated in Fig. 13D.
- the display apparatus 100 may power down.
- Fig. 14 illustrates exemplary operations for adjusting various smart appliances based on a detected routine of a user 130. The operations of Fig. 14 are better understood with reference to Fig. 15A-15B.
- the display apparatus 100 may receive data that relates the states of various smart appliances 117 to display apparatus 100 usage. For example, data relating light switches, timers, drapery controllers, and other smart appliances 117 that were previously correlated with display apparatus 100 usage may be received.
- communication circuitry of the display apparatus 100 may continuously receive state information from smart appliances 117.
- the support database 153 may store the state information of the smart appliances 117 along with usage information of the display apparatus 100.
- the CPU 150 may correlate the state information of the smart appliances 117 and the usage information of the display apparatus 100 to form a relation between the state of the smart appliances and the display apparatus usage. The relation may be indicative of a routine that the user 130 follows in watching video content on the display apparatus 100.
- the state information may define an activation state of the smart appliance 117. For example, whether a smart light was on, off, or dimmed to a particular setting such as 50%. Other information may include whether smart drapes were closed, partially closed, etc.
- the usage information may define times of usage of the display apparatus, program types viewed on the display apparatus, lists of specific users of the display apparatus, and specific characteristics of the display apparatus 100 such as volume, contrast, and brightness of the display apparatus, etc.
- the display apparatus usage may be determined, and at block 1410, corresponding states for one or more smart appliances 117 may be determined based on the received data.
- the display apparatus usage may indicate that the display apparatus 100 is set to a movie channel, that the picture mode has been set to a theatre mode and that the display apparatus 100 is being used in the evening on a Friday night.
- the smart appliance state/display apparatus usage correlation data may indicate that under these conditions, the lights of the room where the display apparatus 100 is located are typically off and that the blinds are closed.
- the state of the various smart appliances may be set according to the state determined at block 1410.
- the CPU 150 may, via the communication circuitry of the display apparatus 100, adjust the various smart appliances 117.
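The correlation of appliance states with display usage could be sketched, under the assumption that both are reduced to discrete snapshots, as a simple co-occurrence model that replays the most frequent appliance state for a recognized usage context; this frequency model is a stand-in for whatever correlation method an implementation actually uses.

```python
# Hedged sketch of the routine detection: log (display-usage, appliance-state)
# observations, then, for a new usage context, replay the most common
# appliance state seen with that context. Field names are illustrative.
from collections import Counter, defaultdict

class RoutineModel:
    def __init__(self) -> None:
        # usage context -> counts of appliance-state snapshots seen with it
        self.observations: dict[tuple, Counter] = defaultdict(Counter)

    def record(self, usage: tuple, appliance_state: tuple) -> None:
        """Store one correlated observation from the support database."""
        self.observations[usage][appliance_state] += 1

    def predict(self, usage: tuple) -> tuple | None:
        """Return the appliance state most often paired with this usage."""
        counts = self.observations.get(usage)
        return counts.most_common(1)[0][0] if counts else None

model = RoutineModel()
usage = ("movie_channel", "theatre_mode", "friday_evening")
model.record(usage, (("living_room_light", "off"), ("blinds", "closed")))
model.record(usage, (("living_room_light", "off"), ("blinds", "closed")))
assert model.predict(usage) == (("living_room_light", "off"), ("blinds", "closed"))
```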
- the user interface 1500 may include an informational control 1505 to notify the user 130 that a routine was detected.
- the user interface 1500 may note that the display apparatus 100 is in a “theatre mode” and that a smart bulb is controlled when the display apparatus 100 is in this mode.
- the user interface 1500 may be updated to provide details related to the detected routine, such as a name assigned to the routine (e.g., “Movie Time 8PM”), a time when the “theatre mode” was entered (e.g., 8:01 PM), and a setting for the smart appliance (e.g., 10%).
- Fig. 16 illustrates a computer system 1600 that may form part of or implement the systems, environments, devices, etc., described above.
- the computer system 1600 may include a set of instructions 1645 that the processor 1605 may execute to cause the computer system 1600 to perform any of the operations described above.
- the computer system 1600 may operate as a stand-alone device or may be connected, e.g., using a network, to other computer systems or peripheral devices.
- the computer system 1600 may operate in the capacity of a server or as a client computer in a server-client network environment, or as a peer computer system in a peer-to-peer (or distributed) environment.
- the computer system 1600 may also be implemented as or incorporated into various devices, such as a personal computer or a mobile device, capable of executing instructions 1645 (sequentially or otherwise) causing a device to perform one or more actions.
- each of the systems described may include a collection of subsystems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer operations.
- the computer system 1600 may include one or more memory devices 1610 communicatively coupled to a bus 1620 for communicating information.
- code operable to cause the computer system to perform operations described above may be stored in the memory 1610.
- the memory 1610 may be a random-access memory, read-only memory, programmable memory, hard disk drive or any other type of memory or storage device.
- the computer system 1600 may include a display 1630, such as a liquid crystal display (LCD), a cathode ray tube (CRT), or any other display suitable for conveying information.
- the display 1630 may act as an interface for the user to see processing results produced by processor 1605.
- the computer system 1600 may include an input device 1625, such as a keyboard or mouse or touchscreen, configured to allow a user to interact with components of system 1600.
- the computer system 1600 may also include a disk or optical drive unit 1615.
- the drive unit 1615 may include a computer-readable medium 1640 in which the instructions 1645 may be stored.
- the instructions 1645 may reside completely, or at least partially, within the memory 1610 and/or within the processor 1605 during execution by the computer system 1600.
- the memory 1610 and the processor 1605 also may include computer-readable media as discussed above.
- the computer system 1600 may include a communication interface 1635 to support communications via a network 1650.
- the network 1650 may include wired networks, wireless networks, or combinations thereof.
- the communication interface 1635 may enable communications via any number of communication standards, such as 802.11, 802.12, 802.20, WiMAX, cellular telephone standards, or other communication standards.
- methods and systems described herein may be realized in hardware, software, or a combination of hardware and software.
- the methods and systems may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements are spread across interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein may be employed.
- Computer program refers to an expression, in a machine-executable language, code or notation, of a set of machine-executable instructions intended to cause a device to perform a particular function, either directly or after one or more of a) conversion of a first language, code, or notation to another language, code, or notation; and b) reproduction of a first language, code, or notation.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Databases & Information Systems (AREA)
- Computer Security & Cryptography (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A display apparatus includes user input circuitry for receiving user commands and a display for outputting video content and a user interface. The video content includes metadata. The apparatus also includes a processor in communication with the user input circuitry and the display, and non-transitory computer readable media in communication with the processor that stores instruction code. The instruction code is executed by the processor and causes the processor to receive, from the user input circuitry, a first scene command to search for scenes in the video content of a scene type. The processor determines, from the metadata, one or more scenes in the video content related to the scene type. The processor then updates the user interface to depict one or more scene images related to the one or more scenes related to the scene type.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
The present application claims priority to US Patent Application No. 15/985,206, filed on May 21, 2018, US Patent Application No. 15/985,292, filed on May 21, 2018, US Patent Application No. 15/985,251, filed on May 21, 2018, US Patent Application No. 15/985,273, filed on May 21, 2018, US Patent Application No. 15/985,303, filed on May 21, 2018, US Patent Application No. 15/985,338, filed on May 21, 2018, and US Patent Application No. 15/985,325, filed on May 21, 2018, which are hereby incorporated by reference in their entirety for all purposes.
This application generally relates to a display apparatus such as a television. In particular, this application describes a display apparatus with an intelligent user interface.
The current breed of higher-end televisions typically includes network connectivity to facilitate streaming video content from content servers. In some cases, the televisions utilize operating systems that facilitate execution of apps for other purposes.
Access to the ever-increasing number of new features requires changes to the user interface. Unfortunately, access to these newer features often results in user interfaces that are frustratingly complex and difficult to navigate.
SUMMARY
In a first aspect, a display apparatus includes user input circuitry for receiving user commands and a display for outputting video content and a user interface. The video content includes metadata. The apparatus also includes a processor in communication with the user input circuitry and the display, and non-transitory computer readable media in communication with the processor that stores instruction code. The instruction code is executed by the processor and causes the processor to receive, from the user input circuitry, a first scene command to search for scenes in the video content of a scene type. The processor determines, from the metadata, one or more scenes in the video content related to the scene type. The processor then updates the user interface to depict one or more scene images related to the one or more scenes related to the scene type.
In a second aspect, a method for controlling a display apparatus includes receiving, via user input circuitry, user commands, and outputting, via a display, video content and a user interface. The video content includes metadata. The method includes receiving, from the user input circuitry, a first scene command to search for scenes in the video content of a scene type; determining, from the metadata, one or more scenes in the video content related to the scene type; and updating the user interface to depict one or more scene images related to the one or more scenes related to the scene type.
In a third aspect, a non-transitory computer readable media that stores instruction code for controlling a display apparatus is provided. The instruction code is executable by a computer for causing the computer to receive, from user input circuitry, a first scene command to search for scenes in the video content of a scene type; determine, from metadata of video content, one or more scenes in the video content related to the scene type; and update a user interface to depict one or more scene images related to the one or more scenes related to the scene type.
In a fourth aspect, a display apparatus includes user input circuitry for receiving user commands; and a display for displaying video content and a user interface. The apparatus also includes a processor in communication with the user input circuitry, the display, and a search history database; and non-transitory computer readable media in communication with the processor that stores instruction code. The instruction code is executed by the processor and causes the processor to receive, from the user input circuitry, a first search command. The processor determines from the first search command one or more potential search commands related to the first search command. The processor then updates the user interface to depict one or more of the potential search commands and receive, from the user input circuitry, a second search command that corresponds to one of the one or more potential search commands. The processor determines video content associated with the first and second search commands; and updates the user interface to depict one or more controls, each being associated with different video content of the determined video content.
Optionally, the instruction code causes the processor to update the user interface to depict unique identifiers over each of the one or more controls; receive, from the user input circuitry, a third search command that specifies one of the unique identifiers; and display video content associated with the specified unique identifier.
Optionally, the first and second search commands correspond to voice commands, and the instruction code causes the processor to implement a natural language processor; and determine, via the natural language processor, a meaning of the voice commands.
Optionally, the instruction code causes the processor to update the search history database to reflect the fact that the second search command was selected to thereby increase a likelihood that the second search command will be predicted during a subsequent search.
Optionally, the instruction code causes the processor to predict the one or more potential search commands based at least in part on a history of search commands specified by the user stored in the search history database.
Optionally, the instruction code causes the processor to update the user interface to depict a phrase that corresponds to the first and second search commands, where the phrase is updated in real-time as the user specifies different search commands.
In a fifth aspect, a method for controlling a display apparatus includes receiving, via user input circuitry, user commands; displaying video content and a user interface; and receiving, from the user input circuitry, a first search command. The method further includes determining from the first search command one or more potential search commands related to the first search command; updating the user interface to depict one or more of the potential search commands; and receiving, from the user input circuitry, a second search command that corresponds to one of the one or more potential search commands. The method also includes determining video content associated with the first and second search commands; and updating the user interface to depict one or more controls, each being associated with different video content of the determined video content.
Optionally, the method further includes updating the user interface to depict unique identifiers over each of the one or more controls; receiving, from the user input circuitry, a third search command that specifies one of the unique identifiers; and displaying video content associated with the specified unique identifier.
Optionally, the first and second search commands correspond to voice commands, and the method further includes implementing a natural language processor; and determining, via the natural language processor, a meaning of the voice commands.
Optionally, the method further includes updating the search history database to reflect the fact that the second search command was selected to thereby increase a likelihood that the second search command will be predicted during a subsequent search.
Optionally, the method further includes predicting the one or more potential search commands based at least in part on a history of search commands specified by the user stored in the search history database.
Optionally, the method further includes updating the user interface to depict a phrase that corresponds to the first and second search commands, where the phrase is updated in real-time as the user specifies different search commands.
In a sixth aspect, a non-transitory computer readable media that stores instruction code for controlling a display apparatus is provided. The instruction code is executable by a computer for causing the computer to receive, from user input circuitry of the computer, a first search command; determine from the first search command one or more potential search commands related to the first search command; update a user interface of the computer to depict one or more of the potential search commands; receive, from the user input circuitry, a second search command that corresponds to one of the one or more potential search commands; determine video content associated with the first and second search commands; and update the user interface to depict one or more controls, each being associated with different video content of the determined video content.
Optionally, the instruction code causes the computer to update the user interface to depict unique identifiers over each of the one or more controls; receive, from the user input circuitry, a third search command that specifies one of the unique identifiers; and display video content associated with the specified unique identifier.
Optionally, the first and second search commands correspond to voice commands, and the instruction code causes the computer to implement a natural language processor; and determine, via the natural language processor, a meaning of the voice commands.
Optionally, the instruction code causes the computer to update the search history database to reflect the fact that the second search command was selected to thereby increase a likelihood that the second search command will be predicted during a subsequent search.
Optionally, the instruction code causes the computer to predict the one or more potential search commands based at least in part on a history of search commands specified by the user stored in the search history database.
Optionally, the instruction code causes the computer to update the user interface to depict a phrase that corresponds to the first and second search commands, where the phrase is updated in real-time as the user specifies different search commands.
In a seventh aspect, a display apparatus includes user input circuitry for receiving user commands and a display for outputting video content and a user interface. The video content includes metadata. The apparatus also includes a processor in communication with the user input circuitry and the display, and non-transitory computer readable media in communication with the processor that stores instruction code. The instruction code is executed by the processor and causes the processor to receive, from the user input circuitry, a query regarding an image of the video content currently displayed on the display; determine one or more objects of the image associated with the query based on the metadata; update the user interface to depict one or more controls, each control being associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to depict information related to the selection.
Optionally, the instruction code causes the processor to determine one or more potential second queries related to the first query and the determined one or more objects; update the user interface to depict one or more of the one or more potential second queries; receive, from the user input circuitry, a second query that corresponds to one of the one or more potential second queries; determine one or more objects of the image associated with the first and the second query based on the metadata; update the user interface to depict one or more controls, each control being associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to depict information related to the selection.
Optionally, the instruction code causes the processor to update the user interface to depict unique identifiers over each of the one or more controls; receive, from the user input circuitry, a command that specifies one of the unique identifiers; and display information associated with the selection that is associated with the specified unique identifier.
Optionally, the query and the selection correspond to voice commands, and the instruction code causes the processor to implement a natural language processor; and determine, via the natural language processor, a meaning of the voice commands.
Optionally, the metadata defines a hierarchy of queries.
Optionally, each of the one or more controls corresponds to an image associated with an object of the one or more determined objects.
Optionally, the instruction code causes the processor to update the user interface to depict a phrase that corresponds to the first and second queries, where the phrase is updated in real-time as the user specifies different queries.
Optionally, the video content continues to stream while the display depicts the one or more controls and the information related to the selection.
In an eighth aspect, a method for controlling a display apparatus includes receiving, via user input circuitry, user commands; and displaying video content and a user interface. The video content includes metadata. The method includes receiving, from the user input circuitry, a query regarding an image of the video content currently displayed; determining one or more objects of the image associated with the query based on the metadata; updating the user interface to depict one or more controls, each control being associated with one of the determined one or more objects; receiving a selection of one of the controls; and updating the user interface to depict information related to the selection.
Optionally, the method further includes determining one or more potential second queries related to the first query and the determined one or more objects; updating the user interface to depict one or more of the one or more potential second queries; receiving, from the user input circuitry, a second query that corresponds to one of the one or more potential second queries; determining one or more objects of the image associated with the first and the second query based on the metadata; updating the user interface to depict one or more controls, each control being associated with one of the determined one or more objects; receiving a selection of one of the controls; and updating the user interface to depict information related to the selection.
Optionally, the method further includes updating the user interface to depict unique identifiers over each of the one or more controls; receiving, from the user input circuitry, a command that specifies one of the unique identifiers; and displaying information associated with the selection that is associated with the specified unique identifier.
Optionally, the query and the selection correspond to voice commands, and the method further includes implementing a natural language processor; and determining, via the natural language processor, a meaning of the voice commands.
Optionally, the metadata defines a hierarchy of queries.
Optionally, each of the one or more controls corresponds to an image associated with an object of the one or more determined objects.
Optionally, the method further includes depicting a phrase that corresponds to the first and second queries, where the phrase is updated in real-time as the user specifies different queries.
Optionally, the video content continues to stream while the one or more controls and the information related to the selection are depicted.
In a ninth aspect, a non-transitory computer readable media that stores instruction code for controlling a display apparatus is provided. The instruction code is executable by a computer for causing the computer to receive, from a user input circuitry of the computer, a query regarding an image of video content currently depicted on a display of the computer; determine one or more objects of the image associated with the query based on metadata; update a user interface of the computer to depict one or more controls, each control being associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to depict information related to the selection.
Optionally, the instruction code causes the computer to determine one or more potential second queries related to the first query and the determined one or more objects; update the user interface to depict one or more of the one or more potential second queries; receive, from the user input circuitry, a second query that corresponds to one of the one or more potential second queries; determine one or more objects of the image associated with the first and the second query based on the metadata; update the user interface to depict one or more controls, each control being associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to depict information related to the selection.
Optionally, the instruction code causes the computer to update the user interface to depict unique identifiers over each of the one or more controls; receive, from the user input circuitry, a command that specifies one of the unique identifiers; and display information associated with the selection that is associated with the specified unique identifier.
Optionally, the query and the selection correspond to voice commands, and the instruction code causes the computer to implement a natural language processor; and determine, via the natural language processor, a meaning of the voice commands.
In a tenth aspect, a display apparatus includes user input circuitry for receiving user commands and a display for outputting video content and a user interface. The video content includes metadata. The apparatus also includes a processor in communication with the user input circuitry and the display, and non-transitory computer readable media in communication with the processor that stores instruction code. The instruction code is executed by the processor and causes the processor to receive a pause command from a user to thereby pause the video content so that the display depicts a still image; subsequently determine one or more objects in the still image based on the metadata; update the user interface to depict one or more controls, each control being associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to depict information related to the selection.
Optionally, each of the one or more controls corresponds to an image associated with an object of the one or more determined objects.
Optionally, the controls include at least one of: an advertisement related to one of the objects, a share control to share the video content, a rating control to rate the video content, and an information control to display information related to one of the objects.
Optionally, the depicted information related to the selection includes a QR code associated with a URL to information related to the selection.
In an eleventh aspect, a method for controlling a display apparatus includes receiving, via user input circuitry, user commands; and displaying video content and a user interface. The video content includes metadata. The method includes receiving a pause command from a user to thereby pause the video content so that a still image is depicted; subsequently determining one or more objects in the still image based on the metadata; updating the user interface to depict one or more controls, each control being associated with one of the determined one or more objects; receiving a selection of one of the controls; and updating the user interface to depict information related to the selection.
Optionally, each of the one or more controls corresponds to an image associated with an object of the one or more determined objects.
Optionally, the controls include at least one of: an advertisement related to one of the objects, a share control to share the video content, a rating control to rate the video content, and an information control to display information related to one of the objects.
Optionally, the depicted information related to the selection includes a QR code associated with a URL to information related to the selection.
In a twelfth aspect, a non-transitory computer readable media that stores instruction code for controlling a display apparatus is provided. The instruction code is executable by a computer for causing the computer to receive a pause command from a user to thereby pause video content so that a display of the computer depicts a still image; subsequently determine one or more objects in the still image based on metadata of the video content; update a user interface of the computer to depict one or more controls, each control being associated with one of the determined one or more objects; receive a selection of one of the controls; and update the user interface to depict information related to the selection.
Optionally, each of the one or more controls corresponds to an image associated with an object of the one or more determined objects.
Optionally, the controls include at least one of: an advertisement related to one of the objects, a share control to share the video content, a rating control to rate the video content, and an information control to display information related to one of the objects.
Optionally, the depicted information related to the selection includes a QR code associated with a URL to information related to the selection.
In a thirteenth aspect, a display apparatus includes presence detection circuitry for detecting an individual in proximity to the display apparatus; a display for displaying video content and a user interface; a processor in communication with the presence detection circuitry and the display; and non-transitory computer readable media in communication with the processor that stores instruction code, which when executed by the processor, causes the processor to determine, from the presence detection circuitry, whether a user is in proximity of the display apparatus; when the user is determined to not be in proximity of the display apparatus, cause the video content to pause; and when the user is determined to subsequently be in proximity of the display apparatus, cause the video content to resume.
Optionally, the presence detection circuitry includes an imager for capturing images in front of the display apparatus, and the instruction code causes the processor to periodically cause the imager to capture an image; analyze the captured image to identify face data; and compare the face data with face data associated with the user to determine whether the user is in proximity of the display apparatus.
Optionally, in an initial state, a plurality of users that includes a primary user are in proximity of the display apparatus; when the primary user is subsequently determined to not be in proximity of the display apparatus, the video content is paused and the user interface is updated to indicate that the video content is paused; and when the primary user is subsequently determined to be in proximity of the display apparatus, the video content is resumed and the user interface is updated to indicate that the video content is resumed.
Optionally, the presence detection circuitry includes near field communication circuitry for performing near field communications with a device that is in proximity of the display apparatus, and the instruction code causes the processor to determine whether the user is in proximity of the display apparatus by detecting near field communications from a portable device associated with the user.
Optionally, when the video content is paused, the processor updates the user interface on the display apparatus to indicate that the video content is paused; and when the video content is resumed, the processor updates the user interface on the display apparatus to indicate that the video content is resumed.
Optionally, when the user interface indicates that the video content is paused, the user interface is updated to depict information related to a content of the video.
Optionally, the information related to the content of the video includes advertising information related to the content.
In a fourteenth aspect, a method for controlling a display apparatus includes displaying video content and a user interface; determining, from presence detection circuitry configured to detect an individual in proximity to the display apparatus, whether a user is in proximity of the display apparatus; when the user is determined to not be in proximity of the display apparatus, pausing the video content; and when the user is determined to subsequently be in proximity of the display apparatus, resuming the video content.
Optionally, the presence detection circuitry includes an imager for capturing images in front of the display apparatus, and the method further includes periodically causing the imager to capture an image; analyzing the captured image to identify face data; and comparing the face data with face data associated with the user to determine whether the user is in proximity of the display apparatus.
Optionally, in an initial state, a plurality of users that includes a primary user are in proximity of the display apparatus, and the method further includes when the primary user is subsequently determined to not be in proximity of the display apparatus, pausing the video content and updating the user interface to indicate that the video content is paused; and when the primary user is subsequently determined to be in proximity of the display apparatus, resuming the video content and updating the user interface to indicate that the video content is resumed.
Optionally, the presence detection circuitry includes near field communication circuitry for performing near field communications with a device that is in proximity of the display apparatus, and the method further includes determining whether the user is in proximity of the display apparatus by detecting near field communications from a portable device associated with the user.
Optionally, the method further includes when the video content is paused, updating the user interface on the display apparatus to indicate that the video content is paused; and when the video content is resumed, updating the user interface on the display apparatus to indicate that the video content is resumed.
Optionally, when the user interface indicates that the video content is paused, the method includes updating the user interface to depict information related to a content of the video.
Optionally, the information related to the content of the video includes advertising information related to the content.
In a fifteenth aspect, a non-transitory computer readable media that stores instruction code for controlling a display apparatus is provided. The instruction code is executable by a computer for causing the computer to determine, via presence detection circuitry of the computer that is configured to detect an individual in proximity to the display apparatus, whether a user is in proximity of the display apparatus; when the user is determined to not be in proximity of the display apparatus, pause the video content; and when the user is determined to subsequently be in proximity of the display apparatus, resume the video content.
Optionally, the presence detection circuitry includes an imager for capturing images in front of the display apparatus, and the instruction code causes the computer to periodically cause the imager to capture an image; analyze the captured image to identify face data; and compare the face data with face data associated with the user to determine whether the user is in proximity of the display apparatus.
Optionally, in an initial state, a plurality of users that includes a primary user are in proximity of the display apparatus, and when the primary user is subsequently determined to not be in proximity of the display apparatus, the instruction code causes the computer to pause the video content and update the user interface to indicate that the video content is paused; and when the primary user is subsequently determined to be in proximity of the display apparatus, the instruction code causes the computer to resume the video content and update the user interface to indicate that the video content is resumed.
Optionally, the presence detection circuitry includes near field communication circuitry for performing near field communications with a device that is in proximity of the display apparatus, and the instruction code causes the computer to determine whether the user is in proximity of the display apparatus by detecting near field communications from a portable device associated with the user.
Optionally, when the video content is paused, the instruction code causes the computer to update the user interface on the display apparatus to indicate that the video content is paused; and when the video content is resumed, the instruction code causes the computer to update the user interface on the display apparatus to indicate that the video content is resumed.
Optionally, when the user interface indicates that the video content is paused, the instruction code causes the computer to update the user interface to depict information related to a content of the video.
In a sixteenth aspect, a display apparatus includes presence detection circuitry for detecting an individual in proximity to the display apparatus; a display for displaying video content and a user interface; a processor in communication with user input circuitry, the display, and a search history database; and non-transitory computer readable media in communication with the processor that stores instruction code, which when executed by the processor, causes the processor to a) determine, from the presence detection circuitry, a user in proximity of the display apparatus; b) determine one or more program types associated with the user; c) determine available programs that match the determined one or more program types; and d) update the user interface to depict a listing of one or more of the available programs that match the determined one or more program types.
Optionally, the instruction code causes the processor to receive a power on command from the user to cause the display apparatus to enter a viewing state; and perform operations a)-d) described in the above aspect after receiving the power on command, but before receiving any subsequent commands from the user.
Optionally, the instruction code causes the processor to determine, from the presence detection circuitry, a plurality of users in proximity of the display apparatus; predict one or more program types associated with the plurality of users based on a history of program types previously viewed by the plurality of users stored in the search history database; determine, from the predicted one or more program types, common program types that are common to each of the plurality of users; determine available programs that match the common program types; and update the user interface to depict a listing of one or more of the available programs that match the common program types.
Optionally, the presence detection circuitry includes an imager for capturing images in front of the display apparatus, and the instruction code causes the processor to periodically cause the imager to capture an image; analyze the captured image to identify face data; and compare the face data with face data associated with the user to determine whether the user is in proximity of the display apparatus.
Optionally, the presence detection circuitry includes near field communication circuitry for performing near field communications with a device that is in proximity of the display apparatus, and the instruction code causes the processor to determine whether the user is in proximity of the display apparatus by detecting near field communications from a portable device associated with the user.
Optionally, the display apparatus further includes the user input circuitry for receiving user commands, and the instruction code causes the processor to receive a command to select one of the available programs; and cause video content associated with the selected available program to be displayed on the display.
Optionally, the command corresponds to a voice command, and the instruction code causes the processor to implement a natural language processor; and determine, via the natural language processor, a meaning of the voice command.
Optionally, determination of one or more program types associated with the user is based on a history of program types previously viewed by the user stored in the search history database in communication with the display apparatus.
Optionally, the instruction code causes the processor to receive a power off command from a user to thereby cause the display apparatus to enter a lower power state and to deactivate the display; perform operations a)-d) described in the above aspect after receiving the power off command, but before receiving any subsequent commands from the user; and deactivate the display after a predetermined time when no user indication to power on the display apparatus is detected.
Optionally, after deactivation of the display and before the predetermined time, the instruction code causes the processor to predict one or more information types associated with the user; and update the user interface to depict information associated with the predicted one or more information types.
In a seventeenth aspect, a method for controlling a display apparatus includes a) providing presence detection circuitry for detecting an individual in proximity to the display apparatus; b) displaying video content and a user interface; c) determining, from the presence detection circuitry, a user in proximity of the display apparatus; d) determining one or more program types associated with the user; e) determining available programs that match the determined one or more program types; and f) updating the user interface to depict a listing of one or more of the available programs that match the determined one or more program types.
Optionally, the method further includes receiving a power on command from the user to cause the display apparatus to enter a viewing state; and performing operations c)-f) after receiving the power on command, but before receiving any subsequent commands from the user.
Optionally, the method further includes determining, from the presence detection circuitry, a plurality of users in proximity of the display apparatus; predicting one or more program types associated with the plurality of users based on a history of program types previously viewed by the plurality of users stored in a search history database; determining, from the predicted one or more program types, common program types that are common to each of the plurality of users; determining available programs that match the common program types; and updating the user interface to depict a listing of one or more of the available programs that match the common program types.
Optionally, the presence detection circuitry includes an imager for capturing images in front of the display apparatus, and the method further includes periodically causing the imager to capture an image; analyzing the captured image to identify face data; and comparing the face data with face data associated with the user to determine whether the user is in proximity of the display apparatus.
Optionally, the presence detection circuitry includes near field communication circuitry for performing near field communications with a device that is in proximity of the display apparatus, and the method further includes determining whether the user is in proximity of the display apparatus by detecting near field communications from a portable device associated with the user.
Optionally, the method further includes receiving, via user input circuitry, user commands, receiving a command to select one of the available programs; and causing video content associated with the selected available program to be displayed on the display apparatus.
Optionally, the command corresponds to a voice command, and the method further includes implementing a natural language processor; and determining, via the natural language processor, a meaning of the voice command.
Optionally, determination of the one or more program types associated with the user is based on a history of program types previously viewed by the user stored in the search history database in communication with the display apparatus.
Optionally, the method further includes receiving a power off command from the user to thereby cause the display apparatus to enter a lower power state and to deactivate a display; performing operations c)-f) after receiving the power off command, but before receiving any subsequent commands from the user; and deactivating the display after a predetermined time when no user indication to power on the display apparatus is detected.
Optionally, after deactivation of the display and before the predetermined time, the method further includes predicting one or more information types associated with the user; and updating the user interface to depict information associated with the predicted one or more information types.
In an eighteenth aspect, a display apparatus includes a display for displaying video content and a user interface; a processor in communication with presence detection circuitry and the display; and non-transitory computer readable media in communication with the processor that stores instruction code. The instruction code is executed by the processor and causes the processor to receive data that relates a smart appliance state to display apparatus usage. The processor also determines current display apparatus usage; and determines a proposed smart appliance state corresponding to the current display apparatus usage based on the received data. The processor adjusts the smart appliance to the determined state.
Optionally, the smart appliance state defines an activation state of the smart appliance, and the display apparatus usage defines one or more of: a time of usage of the display apparatus, a program type viewed on the display apparatus, and a specific user of the display apparatus.
Optionally, the display apparatus includes the presence detection circuitry for detecting a specific user in proximity to the display apparatus, and the presence detection circuitry includes an imager for capturing images in front of the display apparatus, where the instruction code causes the processor to periodically cause the imager to capture an image; analyze the captured image to identify face data; and compare the face data with face data associated with a plurality of users to determine whether the specific user is in proximity of the display apparatus.
Optionally, the display apparatus includes communication circuitry for receiving new state information from smart appliances, and a database for storing the new state information of the smart appliances and information that defines new display apparatus usage of the display apparatus, where the instruction code causes the processor to continuously update the database with the new state information of the smart appliances and the new display apparatus usage information of the display apparatus; and correlate the new state information of the smart appliances and the new display apparatus usage information associated with the display apparatus to form the relation between the smart appliance state and the display apparatus usage.
In a nineteenth aspect, a method for controlling a display apparatus includes displaying video content and a user interface and receiving data that relates a smart appliance state to display apparatus usage. The method also includes determining current display apparatus usage and determining a proposed smart appliance state corresponding to the current display apparatus usage based on the received data. The method also includes adjusting the smart appliance to the determined state.
Optionally, the smart appliance state defines an activation state of the smart appliance, and the display apparatus usage defines one or more of: a time of usage of the display apparatus, a program type viewed on the display apparatus, and a specific user of the display apparatus.
Optionally, the display apparatus includes presence detection circuitry for detecting a specific user in proximity to the display apparatus, and the presence detection circuitry includes an imager for capturing images in front of the display apparatus, where the method further includes periodically causing the imager to capture an image; analyzing the captured image to identify face data; and comparing the face data with face data associated with a plurality of users to determine whether the specific user is in proximity of the display apparatus.
Optionally, the display apparatus includes communication circuitry for receiving new state information from smart appliances, and a database for storing the new state information of the smart appliances and information that defines new display apparatus usage of the display apparatus, where the method further includes continuously updating the database with the new state information of the smart appliances and the new display apparatus usage information of the display apparatus; and correlating the new state information of the smart appliances and the new display apparatus usage information associated with the display apparatus to form the relation between the smart appliance state and the display apparatus usage.
In a twentieth aspect, a non-transitory computer readable media that stores instruction code for controlling a display apparatus is provided. The instruction code is executable by a computer for causing the computer to receive data that relates a smart appliance state to display apparatus usage; determine current display apparatus usage; determine a proposed smart appliance state corresponding to the current display apparatus usage based on the received data; and adjust the smart appliance to the determined state.
Optionally, the smart appliance state defines an activation state of the smart appliance, and the display apparatus usage defines one or more of: a time of usage of the display apparatus, a program type viewed on the display apparatus, and a specific user of the display apparatus.
Optionally, the display apparatus includes presence detection circuitry for detecting a specific user in proximity to the display apparatus, and the presence detection circuitry includes an imager for capturing images in front of the display apparatus, where the instruction code causes the computer to periodically cause the imager to capture an image; analyze the captured image to identify face data; and compare the face data with face data associated with a plurality of users to determine whether the specific user is in proximity of the display apparatus.
Optionally, the display apparatus includes communication circuitry for receiving new state information from smart appliances, and a database for storing the new state information of the smart appliances and information that defines new display apparatus usage of the display apparatus, where the instruction code causes the computer to continuously update the database with the new state information of the smart appliances and the new display apparatus usage information of the display apparatus; and correlate the new state information of the smart appliances and the new display apparatus usage information associated with the display apparatus to form the relation between the smart appliance state and the display apparatus usage.
Fig. 1 illustrates an exemplary environment in which a display apparatus operates;
Fig. 2 illustrates exemplary operations for enhancing navigation of video content;
Figs. 3A-3C illustrate exemplary user interfaces that may be presented to a user during the operations of Fig. 2;
Fig. 4 illustrates exemplary operations that facilitate locating a particular type of video content;
Fig. 5 illustrates an exemplary user interface that may be presented to a user during the operations of Fig. 4;
Fig. 6 illustrates exemplary operations for determining information related to images in video content;
Figs. 7A and 7B illustrate exemplary user interfaces that may be presented to a user during the operations of Fig. 6;
Fig. 8 illustrates alternative exemplary operations for determining information related to images in video content;
Figs. 9A and 9B illustrate exemplary user interfaces that may be presented to a user during the operations of Fig. 8;
Fig. 10 illustrates exemplary operations for automatically pausing video content;
Figs. 11A and 11B illustrate exemplary user interfaces that may be presented to a user during the operations of Fig. 10;
Fig. 12 illustrates exemplary operations for presenting available programs based on detection of a user in proximity of the display apparatus;
Figs. 13A-13D illustrate exemplary user interfaces that may be presented to a user during the operations of Fig. 12;
Fig. 14 illustrates exemplary operations for adjusting various smart appliances based on a detected routine of a user;
Figs. 15A-15B illustrate exemplary user interfaces that may be presented to a user during the operations of Fig. 14; and
Fig. 16 illustrates an exemplary computer system that may form part of or implement the systems described in the figures or in the following paragraphs.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The embodiments described below are directed to various user interface implementations that facilitate access to television features in an intelligent, easy-to-use manner. Generally, the user interfaces rely on various machine learning techniques that facilitate access to these features and other information with a minimum number of steps. The user interfaces are configured to be intuitive, requiring minimal learning time to become proficient in navigating them.
Fig. 1 illustrates an exemplary environment in which a display apparatus operates. Illustrated are the display apparatus 100, a group of mobile devices 105, a GPS network 110, a computer network 115, a group of social media servers 120, a group of content servers 125, a support server 127, and one or more users 130 that may view and/or interact with the display apparatus 100. The display apparatus 100, social media servers 120, content servers 125, and support server 127 may communicate with one another via a network 107 such as the Internet, a cable network, a satellite network, etc.
The social media servers 120 correspond generally to computer systems hosting publicly available information that may be related to the users 130 of the display apparatus 100. For example, the social media servers 120 may host social networking services, blogs, forums, and/or any other systems or websites from which information related to the users 130 may be obtained.
The mobile devices 105 may correspond to mobile phones, tablets, etc. carried by one or more of the users 130. The mobile devices 105 may include short range communication circuitry that facilitates direct communication with the display apparatus 100. For example, the mobile devices 105 may include Bluetooth circuitry, near field communication circuitry, etc. The communication circuitry facilitates detection of a given mobile device 105 when it is in proximity of the display apparatus 100. This in turn may facilitate determination, by the display apparatus 100, of the presence of a user 130 within viewing distance of the display apparatus 100.
The GPS network 110 and computer network 115 may communicate information to the display apparatus 100 that may in turn facilitate determination, by the display apparatus 100, of the general location of the display apparatus 100. For example, the GPS network 110 may communicate information that facilitates determining a relatively precise location of the display apparatus 100. The computer network 115 may assign an IP address to the display apparatus 100 that may be associated with a general location, such as a city or other geographic region.
The content servers 125 correspond generally to computer systems hosting video content. For example, the content servers 125 may correspond to head-end equipment operated by a cable television provider, network provider, etc. The content servers 125 may in some cases store video content such as movies, television shows, sports programs, etc.
In some cases, video content may include metadata that defines various aspects of the video content. For example, metadata associated with a sports matchup may include information such as timestamps, still images, etc. related to various events of the match, such as goals, penalties, etc. The metadata may include information associated with different individuals depicted in the video content such as the names of players, coaches, etc.
The metadata in the video content may include information that facilitates determining whether the video content is of a particular type (e.g., comedy, drama, sports, adventure, etc.). The metadata may include information associated with different individuals depicted in the video content such as the names of actors shown in the video content. The metadata may include information associated with different objects depicted in the video content such as garments worn by individuals, personal items carried by the individuals, and various objects that may be depicted in the video content.
The metadata may have been automatically generated beforehand by various machine learning techniques for identifying individuals, scenes, events, etc. in the video content. In addition or alternatively, the machine learning techniques may use some form of human assistance in making this determination.
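By way of a non-limiting illustration, scene-level metadata of this kind might be represented as a simple record; the field names below are hypothetical and are not mandated by this disclosure:

```python
# Hypothetical scene-level metadata record for a sports program.
# Field names are illustrative only; no particular schema is required.
scene_metadata = {
    "scene_id": 17,
    "scene_type": "goal",                 # e.g., goal, penalty, kickoff
    "start_ms": 1_322_000,                # offset of the scene in the stream
    "end_ms": 1_347_000,
    "still_image": "scenes/goal_17.jpg",  # image shown on the user interface
    "half": 1,                            # which half of the match
    "team": "Real Madrid",
    "individuals": ["player A", "coach B"],
    "objects": ["ball", "goalpost"],
}
```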
The support server 127 corresponds generally to a computer system configured to provide advanced services to the display apparatus 100. For example, the support server 127 may correspond to a high-end computer that is configured to perform various machine learning techniques for determining the meaning of voice commands, predicting responses to the voice commands, etc. The support server 127 may receive voice commands and other types of commands from the display apparatus 100 and communicate responses associated with the commands back to the display apparatus 100.
The display apparatus 100 may correspond to a television or other viewing device with enhanced user interface capabilities. The display apparatus 100 may include a CPU 150, a video processor 160, an I/O interface 155, an AI processor 165, a display 175, a support database 153, and instruction memory 170.
The CPU 150 may correspond to a general-purpose processor. The CPU 150 may execute an operating system, such as Linux or another operating system suitable for execution within a display apparatus. Instruction code associated with the operating system and for controlling various aspects of the display apparatus 100 may be stored within the instruction memory 170. For example, instruction code stored in the instruction memory 170 may facilitate controlling the CPU 150 to communicate information to and from the I/O interface 155. The CPU 150 may process video content received from the I/O interface 155 and communicate the processed video content to the display 175. The CPU 150 may generate various user interfaces that facilitate controlling different aspects of the display apparatus 100.
The I/O interface 155 is configured to interface with various types of hardware and to communicate information received from the hardware to the CPU 150. For example, the I/O interface 155 may be coupled to one or more antennas that facilitate receiving information from the mobile devices 105, GPS network 110, computer network 115, smart appliances 117, etc. The I/O interface 155 may be coupled to an imager 151 arranged on the face of the display apparatus 100 to facilitate capturing images of individuals near the display apparatus 100. The I/O interface 155 may be coupled to one or more microphones 152 arranged on the display apparatus 100 to facilitate capturing voice instructions that may be conveyed by the users 130.
The AI processor 165 may correspond to a processor specifically configured to perform AI operations such as natural language processing, still and motion image processing, voice processing, etc. For example, the AI processor 165 may be configured to perform voice recognition to recognize voice commands received through the microphone. The AI processor 165 may include face recognition functionality to identify individuals in images captured by the imager. In some implementations, the AI processor 165 may be configured to analyze content communicated from one or more content servers to identify objects within the content.
Exemplary operations performed by the CPU 150 and/or other modules of the display apparatus 100 in providing an intelligent user interface are illustrated below. In this regard, the operations may be implemented via instruction code stored in non-transitory computer readable media 170 that resides within the respective subsystems and that is configured to cause the subsystems to perform the operations illustrated in the figures and discussed herein.
Fig. 2 illustrates exemplary operations for enhancing navigation of video content. The operations of Fig. 2 are better understood with reference to Figs. 3A-3C.
At block 200, the display apparatus 100 may be depicting video content, such as a soccer match, as illustrated in Fig. 3A. The user 130 may then issue a first scene command 305 to the display apparatus 100 to have the display apparatus 100 search for scenes in the video content. For example, the user 130 may simply speak out loud, “show me all the goals.” In this case, the natural language processor implemented by the CPU 150 alone or in cooperation with the AI processor 165 may determine the meaning of the voice command. In addition or alternatively, data associated with the voice command may be communicated to the support server 127, which may then ascertain the meaning of the voice command and convey the determined meaning back to the display apparatus 100.
As illustrated in Fig. 3A, in some implementations, the user interface 300 may include a phrase control 310 that is updated in real-time to depict text associated with the commands issued by the user.
At block 205, in response to the first scene command 305, the display apparatus 100 may determine scenes in the video content that are related to a type of scene associated with the first scene command 305. In this regard, the CPU 150 alone or in cooperation with the AI processor 165 may implement various machine learning techniques that utilize metadata associated with the video content to determine scenes in the video content that are related to the scene type. In addition or alternatively, the first scene command 305 may be communicated to the support server 127 and the support server 127 may determine and convey the scene type to the display apparatus 100.
At block 210, the user interface 300 of the display apparatus 100 may be updated to depict scene images 320 associated with the determined scenes. For example, images 320 from the video content metadata associated with the scenes may be displayed on the user interface 300. The images 320 may correspond to still images and/or a sequence of images or video associated with the scene.
In some implementations, the user interface 300 may be updated to display unique identifiers 325 on or near each image. In some implementations, the unique identifiers 325 are superimposed on part of each image so that the unique identifiers 325 are clearly visible.
At block 215, the user 130 may specify a second scene command that specifies one of the unique identifiers 325. For example, the user 130 may specify “one” to select the scene associated with the first image 320. The unique identifiers correspond to the associated scenes. In some implementations, the unique identifiers take the form of identifiers, such as the Arabic numerals shown in Fig. 3A, that are easy for the user to say and easy for the display apparatus 100 itself or for servers to recognize. In some implementations, the user may say an Arabic numeral (for example, “one”) as the second scene command. In some implementations, the user may issue the second scene command by pressing a corresponding button (for example, 1) on a controlling device (for example, a remote control of the display apparatus).
At block 220, video content associated with the specified unique identifier 325 (e.g., “one”) may be displayed on the user interface 300, as illustrated in Fig. 3C.
Returning to block 200, in some implementations, the user 130 may refine a scene command by specifying additional information. For example, in response to receiving the first scene command 305 at block 200, at block 225 one or more potential scene commands 315 related to the first scene command 305 may be determined. The machine learning techniques implemented by the CPU 150, AI processor 165, and/or the support server 127 may be utilized to determine the potential scene commands related to the first scene command 305. In this regard, the metadata in the video content may define a hierarchy of scene commands utilized by the machine learning techniques in determining potential scene commands related to a given first scene command 305.
At block 230, the user interface 300 may be updated to depict one or more of the potential scene commands 315, as illustrated in Fig. 3A. For example, in response to the first scene command 305 “show me all the goals,” the potential scene commands “in the first half”, “by Real Madrid”, etc. may be determined and depicted.
At block 235, the user 130 may issue one of the potential scene commands 315 to instruct the display apparatus 100 to search for scenes in the video content, as illustrated in Fig. 3B. For example, the user 130 may simply speak out loud, “in the first half.” The phrase control 310 may be updated in real-time to depict text associated with the first scene command 305 and a third scene command 330.
The operations may repeat from block 205. For example, in response to the third scene command 330, the display apparatus 100 may determine scenes in the video content that are related to a type of scene associated with the first scene command 305 and the third scene command 330. In addition or alternatively, the first scene command 305 and the third scene command 330 may be conveyed to the support server 127 and the support server 127 may convey information that defines related scenes to the display apparatus 100.
It should be understood that additional scene commands beyond the first and third scene commands may be specified to facilitate narrowing down desired content. For example, after issuance of the third scene command 330, another group of potential scene commands 315 may be depicted, and so on.
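The following is a minimal sketch, in Python, of the scene-search loop described above. It assumes scenes carry metadata records similar to the example given earlier; all class, function, tag, and file names are hypothetical, and in an actual apparatus the identifiers would be superimposed on the scene images 320 rather than printed:

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    scene_type: str          # e.g., "goal", "penalty"
    still_image: str         # image 320 shown on the user interface
    tags: set = field(default_factory=set)  # refinement tags from the metadata

all_scenes = [
    Scene("goal", "goal_01.jpg", {"first half", "Real Madrid"}),
    Scene("goal", "goal_02.jpg", {"second half"}),
    Scene("penalty", "penalty_01.jpg", {"first half"}),
]

def find_scenes(scenes, scene_type, refinements=()):
    """Return scenes of the requested type that carry every refinement tag."""
    return [s for s in scenes
            if s.scene_type == scene_type and set(refinements) <= s.tags]

def present(scenes):
    """Assign a unique identifier to each matching scene image."""
    for identifier, scene in enumerate(scenes, start=1):
        print(f"[{identifier}] {scene.still_image}")

# First scene command: "show me all the goals"
matches = find_scenes(all_scenes, "goal")
present(matches)

# Third scene command narrows the search: "in the first half"
matches = find_scenes(all_scenes, "goal", refinements=["first half"])
present(matches)

# Second scene command: the user says "one" (or presses 1) to select a scene
selected = matches[0]
```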
Fig. 4 illustrates exemplary operations that facilitate locating a particular type of video content. The operations of Fig. 4 are better understood with reference to Fig. 5.
At block 400, the display apparatus 100 may be depicting video content, such as a sitcom, as illustrated in Fig. 5. The user 130 may issue a first search command 505 to the display apparatus 100 to have the display apparatus 100 search for a particular type of video content. For example, the user 130 may simply speak out loud, “show.” In this case, the natural language processor implemented by the CPU 150 alone or in cooperation with the AI processor 165 may determine the meaning of the voice command. In addition or alternatively, data associated with the voice command may be communicated to the support server 127, which may then ascertain the meaning of the voice command and convey the determined meaning back to the display apparatus 100.
At block 405, the display apparatus 100 may determine video content that is related to the first search command 505. In this regard, the CPU 150 alone or in cooperation with the AI processor 165 may implement various machine learning techniques that utilize metadata associated with the video content to determine video content that is related to the search command. In addition or alternatively, the first search command 505 may be communicated to the support server 127 and the support server 127 may determine and convey information related to the video content that is in turn related to the first search command to the display apparatus 100.
At block 410, the user interface 500 may be updated to depict controls 520 that facilitate selecting video content. Each control may include a unique identifier 525 on or near the control 520 that facilitates selecting the control by voice. For example, a first control with the unique identifier “one” may correspond to an image that represents an input source of the display apparatus 100 that facilitates selecting video content from the input source. A second control with the unique identifier “two” may correspond to an image of an actor that, when selected, facilitates selecting video content that includes the actor. A fourth control with the unique identifier “four” may correspond to a scene from a movie that the user frequently watches or that is associated with types of shows the user 130 watches.
The machine learning techniques may determine the type of control to display based at least in part on a history of search commands and selections specified by the user that may be stored in the support database 153 of the display apparatus 100 or maintained within the support server 127. In some implementations, the support database 153 is dynamically updated to reflect the user’s choices to improve the relevancy of the controls displayed to the user for subsequent requests.
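As a rough sketch of this history-driven ranking, assuming a simple counter of past selections is kept in the support database 153 (the control-type names here are hypothetical):

```python
from collections import Counter

# Hypothetical per-user selection history; the schema is illustrative only.
selection_history = Counter({"input_source": 3, "actor": 7, "movie_scene": 12})

def rank_control_types(candidates):
    """Order candidate control types by how often the user chose them before."""
    return sorted(candidates, key=lambda c: selection_history[c], reverse=True)

def record_selection(control_type):
    """Dynamically update the history so subsequent requests reflect the choice."""
    selection_history[control_type] += 1

print(rank_control_types(["actor", "input_source", "movie_scene"]))
# -> ['movie_scene', 'actor', 'input_source']
record_selection("actor")
```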
At block 415, the user 130 may specify a second search command that specifies one of the unique identifiers. For example, the user 130 may specify “four” to select the scene associated with the fourth image 520.
At block 420, video content associated with the specified unique identifier (e.g., “four”) may be depicted on the user interface 500 of the display apparatus 100.
Returning to block 400, in some implementations, the user 130 may refine a search command by specifying additional information. For example, in response to receiving the first search command at block 400, at block 425, one or more potential third search commands 515 related to the first search command 505 may be determined. The machine learning techniques implemented by the CPU 150, AI processor 165, and/or the support server 127 may be utilized to determine the potential commands related to the first search command 505. As noted earlier, the metadata in the video content may include information that facilitates determining whether the video content is associated with a particular type of video content (e.g., comedy, drama, sports, etc.). This metadata may be utilized by the machine learning techniques in determining potential third search commands related to a given first search command.
At block 430, the user interface 500 may be updated to depict one or more of the potential search commands 515, as illustrated in Fig. 5. For example, in response to the first search command “show,” the potential search commands 515 “games”, “action movies”, etc. may be determined and displayed.
As described earlier, in some implementations, the user interface 500 may include a phrase control 510 that is updated in real-time to depict text associated with the commands issued by the user.
At block 435, the user 130 may issue one of the potential search commands 515 to instruct the display apparatus 100 to search for various types of video content. For example, the user 130 may simply speak out loud, “action movies.” The phrase control 510 may be updated in real-time to depict text associated with the first search command 505 and the third search command 515 (e.g., “show action movies”).
The operations may repeat from block 405. For example, in response to the third search command, the display apparatus 100 may determine video content that is related to the first and third search commands and display appropriate controls for selection by the user.
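A minimal sketch of how successive voice fragments could be accumulated into the compound search phrase mirrored by the phrase control 510; the function and parameter names are assumptions for illustration:

```python
phrase_fragments = []  # mirrored in real time by the phrase control 510

def on_voice_fragment(fragment, run_search):
    """Append the newly spoken fragment and re-run the search on the full phrase."""
    phrase_fragments.append(fragment)
    return run_search(" ".join(phrase_fragments))

# First search command, then a refining third search command:
on_voice_fragment("show", run_search=print)           # searches for "show"
on_voice_fragment("action movies", run_search=print)  # searches for "show action movies"
```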
Fig. 6 illustrates exemplary operations for determining information related to images in video content. The operations of Fig. 6 are better understood with reference to Figs. 7A and 7B.
At block 600, the display apparatus 100 may be depicting video content, such as a movie, as illustrated in Fig. 7A. The user 130 may issue a first query 705 to the display apparatus 100 to have the display apparatus 100 provide information related to the query. For example, the user 130 may simply speak out loud, “who is on screen.” In this case, the natural language processor implemented by the CPU 150 and/or AI processor 165 may determine the meaning of the voice command. In addition or alternatively, data associated with the voice command may be communicated to the support server 127, which may then ascertain the meaning of the voice command and convey the determined meaning back to the display apparatus 100.
At block 605, in response to the first query 705, the display apparatus 100 may determine one or more objects of the image associated with the query 705. In this regard, the CPU 150 alone or in cooperation with the AI processor 165 may implement various machine learning techniques that utilize metadata associated with the video content to determine different objects being depicted on the user interface 700 of the display apparatus 100. In addition or alternatively, the first query 705 may be communicated to the support server 127 and the support server 127 may determine and convey information related to different objects depicted on the user interface 700 to the display apparatus 100.
At block 610, the user interface 700 of the display apparatus 100 may be updated to depict controls 720 that facilitate selecting different objects. Each control may include a unique identifier 725 on or near each control 720 that facilitates selecting the control by voice. For example, controls for each actor may be depicted on the user interface 700.
At block 615, the user 130 may select one of the unique identifiers 725. For example, the user 130 may specify “two” to select a particular actor.
At block 620, the user interface 700 may be updated to depict information related to the selection. For example, as illustrated in Fig. 7B, an informational control 730 with information related to the selected actor may be provided.
Returning to block 600, in some implementations, the user 130 may refine the query by specifying additional information. For example, in response to receiving the first query at block 600, at block 625, one or more potential second queries 715 related to the first query 705 may be determined. The machine learning techniques implemented by the CPU 150 and/or the support server 127 may be utilized to determine the potential queries related to the first query 705. Metadata in the video content may be utilized by the machine learning techniques in determining potential queries related to a given first query.
At block 630, the user interface 700 may be updated to depict one or more of the potential queries 715, as illustrated in Fig. 7A. For example, in response to the first query “who is on screen,” the potential queries “other movies by john doe”, “where was it filmed”, etc. may be determined and depicted.
As described earlier, in some implementations, the user interface 700 may include a phrase control 710 that is updated in real-time to depict text associated with the queries issued by the user.
At block 635, the user 130 may indicate a second query that corresponds to one of the potential queries 715 to instruct the display apparatus 100 to depict information related to the query. The phrase control 710 may be updated in real-time to depict text associated with the first query 705 and the second query.
At block 640, objects related to the second query may be determined and included with or may replace the objects previously determined. Then the operations may repeat from block 605.
Fig. 8 illustrates alternative exemplary operations for determining information related to images in video content. The operations of Fig. 8 are better understood with reference to Figs. 9A and 9B.
At block 800, the display apparatus 100 may be depicting video content, such as a sitcom, as illustrated in Fig. 9A. The user 130 may issue a command to the display apparatus 100 to pause the video content so that a still image is depicted on the user interface 900.
At block 805, the display apparatus 100 may determine one or more objects of the image. In this regard, the CPU 150 alone or in cooperation with the AI processor 165 may implement various machine learning techniques that utilize metadata associated with the video content to determine different objects being depicted in the still image. In addition or alternatively, the still image may be communicated to the support server 127 and the support server 127 may determine and convey different objects being depicted in the still image to the display apparatus 100.
At block 810, the user interface of the display apparatus 100 may be updated to depict controls 920 that facilitate selecting different objects, as illustrated in Fig. 9A. For example, controls 920 may be provided for selecting an advertisement related to one of the objects in the still image, sharing the video content, rating the video content, or displaying information related to one of the objects. Controls 920 for other aspects may be provided.
Each control 920 may include a unique identifier on or near the control 920 that facilitates selecting the control by voice.
At block 815, the user 130 may select one of the unique identifiers. For example, the user 130 may specify the unique identifier associated with a control depicting a handbag that corresponds to a handbag shown in the still image.
At block 820, the user interface 900 may be updated to depict information related to the selection. For example, as illustrated in Fig. 9B, an informational control 925 with information related to the selection may be provided. In one implementation, the informational control 925 may depict a QR code associated with a URL that may be utilized to find out more information related to the selection. The QR code facilitates navigation to the URL by scanning the QR code with an appropriate application on, for example, a mobile device.
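For illustration, a QR code of this kind could be generated with the third-party Python package qrcode; the URL and output file name below are hypothetical:

```python
import qrcode  # third-party package, e.g. pip install "qrcode[pil]"

# Hypothetical URL for the selected object; the informational control 925
# would render the resulting image for the user to scan with a mobile device.
url = "https://example.com/items/handbag-123"
image = qrcode.make(url)
image.save("object_info_qr.png")
```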
Fig. 10 illustrates exemplary operations for automatically pausing video content. The operations of Fig. 10 are better understood with reference to Figs. 11A and 11B.
At block 1000, the display apparatus 100 may determine whether a user is in proximity of the display apparatus 100. For example, in one implementation, the imager 151 of the display apparatus 100 may capture images in front of the display apparatus. The CPU 150 alone or in cooperation with the AI processor 165 may control the imager 151 to capture an image, analyze the captured image to identify face data in the image, and compare the face data with face data associated with the user 130 to determine whether the user 130 is in proximity of the display apparatus. In this regard, face data associated with the user 130 may have been previously captured by the display apparatus 100 during, for example, an initial setup routine. The face data may have been stored to the support database 153.
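A sketch of such a face-data comparison, using the third-party face_recognition package as a stand-in for the processing performed by the CPU 150 and/or AI processor 165; the image file names are hypothetical:

```python
import face_recognition  # third-party package: pip install face_recognition

# Reference face captured during the initial setup routine and stored in the
# support database 153 (file names here are illustrative assumptions).
setup_image = face_recognition.load_image_file("user_setup_photo.jpg")
known_encoding = face_recognition.face_encodings(setup_image)[0]

def user_in_frame(frame_path):
    """Return True if the registered user's face appears in a captured image."""
    frame = face_recognition.load_image_file(frame_path)
    encodings = face_recognition.face_encodings(frame)
    return any(face_recognition.compare_faces([known_encoding], enc)[0]
               for enc in encodings)
```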
In another implementation, near field communication circuitry of the display apparatus 100 may be utilized to detect the presence of a device in proximity to the display apparatus, carried by a user 130, that has near field communication capabilities. The device may have been previously registered with the display apparatus 100 as belonging to a particular user. Registration information may be stored to the support database 153.
At block 1005, if the user is determined to not be in proximity of the display apparatus 100, then at block 1010, if the video content is not already paused, the video content may be paused, as illustrated in Fig. 11A. Referring to Fig. 11A, a status control 1105 may be depicted on the user interface 1100 to indicate that the video content has been paused.
In some implementations, the user interface 1100 may depict additional details related to a still image depicted on the user interface 1100 such as the information described above in relation to Figs. 9A and 9B.
If at block 1005, the user 130 is determined to be in proximity of the display, then at block 1015, if the video content is not already resumed, the video content may be resumed, as illustrated in Fig. 11B. Referring to Fig. 11B, the status control 1105 may be updated to indicate that the video content will be resuming.
In some implementations, the display apparatus 100 may perform the operations above even when other users 130 are in proximity of the display apparatus 100. For example, in an initial state, a number of users 130 that includes a primary user 130 may be in proximity of the display apparatus. When the primary user is subsequently determined to not be in proximity of the display apparatus, the video content may be paused, as described above. When the primary user is subsequently determined to be in proximity of the display apparatus, the video content may be resumed.
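A minimal sketch of the presence-keyed pause/resume behavior described above, assuming a hypothetical playback interface with pause() and resume() methods:

```python
class PresencePauseController:
    """Sketch of pause/resume logic keyed to the primary user's presence."""

    def __init__(self, player):
        self.player = player           # hypothetical playback interface
        self.paused_by_absence = False

    def on_presence_sample(self, primary_user_present):
        if not primary_user_present and not self.paused_by_absence:
            self.player.pause()        # status control 1105 indicates the pause
            self.paused_by_absence = True
        elif primary_user_present and self.paused_by_absence:
            self.player.resume()       # status control 1105 indicates resuming
            self.paused_by_absence = False
```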
Fig. 12 illustrates exemplary operations for presenting available programs based on detection of a user in proximity of the display apparatus 100. The operations of Fig. 12 are better understood with reference to Figs. 13A-13D.
At block 1200, the display apparatus 100 may determine whether a user is in proximity of the display apparatus. For example, in one implementation, the imager 151 of the display apparatus 100 may capture images in front of the display apparatus. The CPU 150 alone or in cooperation with the AI processor 165 may control the imager 151 to capture an image, analyze the captured image to identify face data in the image, and compare the face data with face data associated with the user to determine whether the user is in proximity of the display apparatus 100. As noted above, face data associated with the user 130 may have been previously captured by the display apparatus 100 during, for example, an initial setup routine.
In another implementation, the presence of the user 130 may be determined based on near field communication circuitry of a device carried by the user 130, as described above.
At block 1205, if a user is determined to be in proximity of the display apparatus 100, then one or more program types associated with the user 130 are determined. In this regard, the CPU 150 alone or in cooperation with the AI processor 165 may implement various machine learning techniques to determine program types associated with the user 130. In addition or alternatively, information that identifies the user 130 may be communicated to the support server 127 and the support server 127 may determine program types associated with the user. The machine learning techniques may determine the program types associated with the user 130 by, for example, analyzing a history of programs viewed by the user 130, by receiving information from social media servers 120 related to likes and dislikes of the user, and/or in another manner.
At block 1210, programs that are available for watching at the time of user detection or within a predetermined time thereafter (e.g., 30 minutes) may be determined. For example, metadata associated with available video content may be analyzed to determine whether any of the video content is related to the user-associated program types determined above.
At block 1215, the user interface 1300 may be updated to present information 1305 related to available programs that match the user-associated program types. The user interface 1300 may include controls that facilitate watching one of the available programs, recording the available programs, etc.
In some implementations, a group of users 130 may be detected within proximity of the display apparatus 100, and the program types determined at block 1205 may be based on the intersection of program types associated with two or more of the users 130. The user interface 1300 may be updated to depict information 1305 related to available programs that match the intersection of user-associated program types, as sketched below.
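A minimal sketch of this intersection-based recommendation, with hypothetical user identifiers and program-type histories standing in for the search history database:

```python
# Hypothetical viewing histories, e.g. drawn from the search history database.
program_types_by_user = {
    "user_a": {"sports", "drama", "news"},
    "user_b": {"sports", "comedy", "news"},
}

def common_program_types(detected_users):
    """Intersect the program types associated with every detected user."""
    histories = [program_types_by_user[u]
                 for u in detected_users if u in program_types_by_user]
    return set.intersection(*histories) if histories else set()

def matching_programs(available_programs, detected_users):
    """Keep only the available programs whose type every detected user favors."""
    wanted = common_program_types(detected_users)
    return [p for p in available_programs if p["type"] in wanted]

available = [{"title": "Evening Match", "type": "sports"},
             {"title": "Sitcom Rerun", "type": "comedy"}]
print(matching_programs(available, ["user_a", "user_b"]))
# -> [{'title': 'Evening Match', 'type': 'sports'}]
```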
In certain implementations, the operations above may be performed spontaneously when a user 130 is detected. For example, a first user 130 may be viewing video content on the display apparatus 100 when a second user comes within proximity of the display apparatus. The operations performed above may occur after detection of the second user.
In other implementations, the operations above may be performed immediately after powering on the display apparatus 100.
In yet other implementations, the operations may be performed after a power off indication has been received. For example, as illustrated in Fig. 13B, the display apparatus 100 may either power up after having been off or may cancel a power off operation, and the user interface 1300 may be updated to depict a minimal amount of information so as not to cause too much of a distraction. For example, the user interface 1300 may merely depict an informational control 1305 to make the user 130 aware of, for example, an upcoming program. A control 1310 may be provided to allow the user 130 to bring the display apparatus 100 into a fully powered up condition to facilitate watching the program.
In yet other implementations, one or more information types associated with the user 130 may be determined and the user interface 1300 may be updated to depict information associated with the determined information types. For example, as illustrated in Fig. 13C, the user 130 may have been determined to be interested in knowing the weather. In this case, the display apparatus 100 may be powered up in a minimal power state and an informational control 1305 that displays information related to the weather may be depicted. Alternatively, the informational control 1305 may be updated to display information related to an upcoming television episode, as illustrated in Fig. 13D. After a predetermined time (e.g., 1 minute) the display apparatus 100 may power down.
Fig. 14 illustrates exemplary operations for adjusting various smart appliances based on a detected routine of a user 130. The operations of Fig. 14 are better understood with reference to Figs. 15A-15B.
At block 1400, the display apparatus 100 may receive data that relates the state of various smart appliances 117 to display apparatus 100 usage. For example, data relating light switches, timers, drapery controllers, and other smart appliances 117 that were previously related to display apparatus 100 usage may be received. In this regard, communication circuitry of the display apparatus 100 may continuously receive state information from the smart appliances 117. The support database 153 may store the state information of the smart appliances 117 along with usage information of the display apparatus 100. The CPU 150 may correlate the state information of the smart appliances 117 and the usage information of the display apparatus 100 to form a relation between the state of the smart appliances and the display apparatus usage. The relation may be indicative of a routine that the user 130 follows in watching video content on the display apparatus 100.
The state information may define an activation state of the smart appliance 117. For example, the state information may indicate whether a smart light was on, off, or dimmed to a particular setting such as 50%, or whether smart drapes were closed, partially closed, etc. The usage information may define times of usage of the display apparatus, program types viewed on the display apparatus, lists of specific users of the display apparatus, and specific characteristics of the display apparatus 100 such as volume, contrast, and brightness.
At block 1405, the display apparatus usage may be determined, and at block 1410, corresponding states for one or more smart appliances 117 may be determined based on the received data. For example, the display apparatus usage may indicate that the display apparatus 100 is set to a movie channel, that the picture mode has been set to a theatre mode, and that the display apparatus 100 is being used on a Friday evening. The smart appliance state/display apparatus usage correlation data may indicate that under these conditions, the lights of the room where the display apparatus 100 is located are typically off and that the blinds are closed.
At block 1415, the state of the various smart appliances may be set according to the state determined at block 1410. For example, the CPU 150 may, via the communication circuitry of the display apparatus 100, adjust the various smart appliances 117.
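A rough sketch of how the correlation of blocks 1400-1415 might be realized as a most-frequent-state lookup; the usage contexts, appliance names, and state values below are all illustrative assumptions:

```python
from collections import Counter, defaultdict

# Observed (usage context, smart appliance state) pairs, as might accumulate
# in the support database 153; names and values are hypothetical.
observations = [
    (("friday_evening", "theatre_mode"), {"lights": "off", "blinds": "closed"}),
    (("friday_evening", "theatre_mode"), {"lights": "off", "blinds": "closed"}),
    (("saturday_morning", "standard"),   {"lights": "on",  "blinds": "open"}),
]

def learn_routine(observations):
    """Map each usage context to its most frequently observed appliance state."""
    tallies = defaultdict(Counter)
    for context, state in observations:
        tallies[context][tuple(sorted(state.items()))] += 1
    return {context: dict(counter.most_common(1)[0][0])
            for context, counter in tallies.items()}

routine = learn_routine(observations)
proposed_state = routine[("friday_evening", "theatre_mode")]
print(proposed_state)  # -> {'blinds': 'closed', 'lights': 'off'}
# The display apparatus would then adjust the smart appliances 117 accordingly.
```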
As illustrated in Fig. 15A, the user interface 1500 may include an informational control 1505 to notify the user 130 that a routine was detected. For example, the user interface 1500 may note that the display apparatus 100 is in a “theatre mode” and that a smart bulb is controlled when the display apparatus 100 is in this mode. As illustrated in Fig. 15B, the user interface 1500 may be updated to provide details related to the detected routine, such as a name assigned to the routine (e.g., “Movie Time 8PM”), a time when the “theatre mode” was entered (e.g., 8:01 PM), and a setting to set the smart appliance to (e.g., 10%).
Fig. 16 illustrates a computer system 1600 that may form part of or implement the systems, environments, devices, etc., described above. The computer system 1600 may include a set of instructions 1645 that a processor 1605 may execute to cause the computer system 1600 to perform any of the operations described above. The computer system 1600 may operate as a stand-alone device or may be connected, e.g., using a network, to other computer systems or peripheral devices.
In a networked deployment, the computer system 1600 may operate in the capacity of a server or as a client computer in a server-client network environment, or as a peer computer system in a peer-to-peer (or distributed) environment. The computer system 1600 may also be implemented as or incorporated into various devices, such as a personal computer or a mobile device, capable of executing instructions 1645 (sequentially or otherwise) causing a device to perform one or more actions. Further, each of the systems described may include a collection of subsystems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer operations.
The computer system 1600 may include one or more memory devices 1610 communicatively coupled to a bus 1620 for communicating information. In addition, code operable to cause the computer system to perform operations described above may be stored in the memory 1610. The memory 1610 may be a random-access memory, read-only memory, programmable memory, hard disk drive or any other type of memory or storage device.
The computer system 1600 may include a display 1630, such as a liquid crystal display (LCD), a cathode ray tube (CRT), or any other display suitable for conveying information. The display 1630 may act as an interface for the user to see processing results produced by the processor 1605.
Additionally, the computer system 1600 may include an input device 1625, such as a keyboard or mouse or touchscreen, configured to allow a user to interact with components of system 1600.
The computer system 1600 may also include a disk or optical drive unit 1615. The drive unit 1615 may include a computer-readable medium 1640 in which the instructions 1645 may be stored. The instructions 1645 may reside completely, or at least partially, within the memory 1610 and/or within the processor 1605 during execution by the computer system 1600. The memory 1610 and the processor 1605 also may include computer-readable media as discussed above.
The computer system 1600 may include a communication interface 1635 to support communications via a network 1650. The network 1650 may include wired networks, wireless networks, or combinations thereof. The communication interface 1635 may enable communications via any number of communication standards, such as 802.11, 802.12, 802.20, WiMAX, cellular telephone standards, or other communication standards.
Accordingly, methods and systems described herein may be realized in hardware, software, or a combination of hardware and software. The methods and systems may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements are spread across interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein may be employed.
The methods and systems described herein may also be embedded in a computer program product, which includes all the features enabling the implementation of the operations described herein and which, when loaded into a computer system, is able to carry out these operations. “Computer program,” as used herein, refers to an expression, in a machine-executable language, code, or notation, of a set of machine-executable instructions intended to cause a device to perform a particular function, either directly or after one or both of: a) conversion of a first language, code, or notation to another language, code, or notation; and b) reproduction of a first language, code, or notation.
While methods and systems have been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the claims. Therefore, it is intended that the present methods and systems not be limited to the particular embodiment disclosed, but that the disclosed methods and systems include all embodiments falling within the scope of the appended claims.
Claims (17)
1. A display apparatus comprising:
user input circuitry for receiving user commands;
a display for outputting video content, the video content including metadata, and a user interface;
a processor in communication with the user input circuitry and the display; and
non-transitory computer readable media in communication with the processor that stores instruction code, which when executed by the processor, causes the processor to:
receive, from the user input circuitry, a first scene command to search for scenes in the video content of a scene type;
determine, from the metadata, one or more scenes in the video content related to the scene type; and
update the user interface to depict one or more scene images related to the one or more scenes related to the scene type.

2. The display apparatus according to claim 1, wherein the instruction code causes the processor to:
determine from the first scene command one or more potential second scene commands related to the first scene command based on the metadata in the video content;
update the user interface to depict one or more of the potential second scene commands;
receive, from the user input circuitry, a second scene command to depict video content of a second scene type related to the first scene command and the second scene command;
determine, from the metadata, one or more scenes in the video content related to the second scene type; and
update the user interface to depict one or more scene images related to the one or more scenes related to the second scene type.

3. The display apparatus according to claim 1, wherein the instruction code causes the processor to:
update the user interface to depict unique identifiers over each of the one or more scene images;
receive, from the user input circuitry, a third scene command that specifies one of the unique identifiers; and
display video content from a scene image associated with the specified unique identifier.

4. The display apparatus according to claim 1, wherein the first scene command corresponds to a voice command, wherein the instruction code causes the processor to:
implement a natural language processor; and
determine, via the natural language processor, a meaning of the voice command.

5. The display apparatus according to claim 3, wherein the unique identifiers comprise Arabic numerals.

6. The display apparatus according to claim 3, wherein a first identifier of the unique identifiers is shown superimposed upon a first scene image of the one or more scene images.

7. The display apparatus according to claim 5, wherein the third scene command comprises a voice input from the user.

8. The display apparatus according to claim 5, wherein the third scene command comprises an input from the user by a remote control.

9. A method for controlling a display apparatus comprising:
receiving, via user input circuitry, user commands;
outputting, via a display, video content, the video content including metadata, and a user interface;
receiving, from the user input circuitry, a first scene command to search for scenes in the video content of a scene type;
determining, from the metadata, one or more scenes in the video content related to the scene type; and
updating the user interface to depict one or more scene images related to the one or more scenes related to the scene type.

10. The method according to claim 9, further comprising:
determining from the first scene command one or more potential second scene commands related to the first scene command based on the metadata in the video content;
updating the user interface to depict one or more of the potential second scene commands;
receiving, from the user input circuitry, a second scene command to depict video content of a second scene type related to the first scene command and the second scene command;
determining, from the metadata, one or more scenes in the video content related to the second scene type; and
updating the user interface to depict one or more scene images related to the one or more scenes related to the second scene type.

11. The method according to claim 9, further comprising:
updating the user interface to depict unique identifiers over each of the one or more scene images;
receiving, from the user input circuitry, a third scene command that specifies one of the unique identifiers; and
displaying video content from a scene image associated with the specified unique identifier.

12. The method according to claim 9, wherein the first scene command corresponds to a voice command, the method further comprises:
implementing a natural language processor; and
determining, via the natural language processor, a meaning of the voice command.

13. The method according to claim 11, wherein the unique identifiers comprise Arabic numerals.

14. The method according to claim 11, wherein a first identifier of the unique identifiers is shown superimposed upon a first scene image of the one or more scene images.

15. The method according to claim 13, wherein the third scene command comprises a voice input from the user.

16. The method according to claim 13, wherein the third scene command comprises an input from the user by a remote control.

17. A non-transitory computer readable media that stores instruction code for controlling a display apparatus, the instruction code being executable by a computer for causing the computer to:
receive, from user input circuitry of the computer, a first scene command to search for scenes in video content of a scene type;
determine, from metadata of the video content, one or more scenes in the video content related to the scene type; and
update a user interface of the computer to depict one or more scene images related to the one or more scenes related to the scene type.
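For orientation only, the following sketch illustrates the scene-search flow recited in claims 1, 9, and 17 above: scene metadata is filtered by the requested scene type, matches are tagged with Arabic-numeral identifiers (claims 5 and 13), and a follow-up command that names one identifier selects the scene for playback (claims 3 and 11). The Scene structure and function names are illustrative assumptions, not the claimed data model.

```python
from dataclasses import dataclass

@dataclass
class Scene:
    """Hypothetical per-scene metadata carried with the video content."""
    start_s: float      # scene start time in seconds
    scene_type: str     # e.g., "action", "fight", "song"
    thumbnail: str      # representative frame used as the scene image

def search_scenes(metadata: list[Scene], scene_type: str) -> list[tuple[int, Scene]]:
    """First scene command: find scenes of the requested type and number them."""
    matches = [scene for scene in metadata if scene.scene_type == scene_type]
    # Unique Arabic-numeral identifiers, superimposed on each scene image in the UI.
    return list(enumerate(matches, start=1))

def select_scene(results: list[tuple[int, Scene]], identifier: int) -> Scene:
    """Third scene command: resolve an identifier (spoken or via remote) to a scene."""
    for ident, scene in results:
        if ident == identifier:
            return scene
    raise KeyError(f"no scene with identifier {identifier}")
```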
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980000619.3A CN110741652A (en) | 2018-05-21 | 2019-05-08 | Display device with intelligent user interface |
Applications Claiming Priority (14)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201815985273A | 2018-05-21 | 2018-05-21 | |
US15/985,338 US20190356952A1 (en) | 2018-05-21 | 2018-05-21 | Display apparatus with intelligent user interface |
US15/985,292 | 2018-05-21 | | |
US15/985,251 | 2018-05-21 | | |
US15/985,325 | 2018-05-21 | | |
US15/985,206 | 2018-05-21 | | |
US15/985,325 US10965985B2 (en) | 2018-05-21 | 2018-05-21 | Display apparatus with intelligent user interface |
US15/985,251 US11507619B2 (en) | 2018-05-21 | 2018-05-21 | Display apparatus with intelligent user interface |
US15/985,303 US20190356951A1 (en) | 2018-05-21 | 2018-05-21 | Display apparatus with intelligent user interface |
US15/985,303 | 2018-05-21 | | |
US15/985,273 | 2018-05-21 | | |
US15/985,292 US20190354603A1 (en) | 2018-05-21 | 2018-05-21 | Display apparatus with intelligent user interface |
US15/985,206 US20190354608A1 (en) | 2018-05-21 | 2018-05-21 | Display apparatus with intelligent user interface |
US15/985,338 | 2018-05-21 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019223536A1 (en) | 2019-11-28 |
Family
ID=68615946
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/086009 WO2019223536A1 (en) | 2018-05-21 | 2019-05-08 | Display apparatus with intelligent user interface |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110741652A (en) |
WO (1) | WO2019223536A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021169983A1 (en) * | 2020-02-24 | 2021-09-02 | Qingdao Haier Smart Technology R&D Co., Ltd. | Consumer appliance inheritance methods and systems |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114464180B (en) * | 2022-02-21 | 2025-01-21 | Hisense Electronic Technology (Wuhan) Co., Ltd. | Intelligent device and intelligent voice interaction method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010183301A (en) * | 2009-02-04 | 2010-08-19 | Sony Corp | Video processing device, video processing method, and program |
CN102263907A (en) * | 2011-08-04 | 2011-11-30 | CCTV International Networks Co., Ltd. | Play control method of competition video, and generation method and device for clip information of competition video |
US20150382079A1 (en) * | 2014-06-30 | 2015-12-31 | Apple Inc. | Real-time digital assistant knowledge updates |
CN105912560A (en) * | 2015-02-24 | 2016-08-31 | Zepp Labs, Inc. | Detect sports video highlights based on voice recognition |
CN107801106A (en) * | 2017-10-24 | 2018-03-13 | Vivo Mobile Communication Co., Ltd. | Video clip capture method and electronic device |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8577911B1 (en) * | 2010-03-23 | 2013-11-05 | Google Inc. | Presenting search term refinements |
US9596515B2 (en) * | 2012-01-04 | 2017-03-14 | Google Inc. | Systems and methods of image searching |
CN103000173B (en) * | 2012-12-11 | 2015-06-17 | UCWeb Inc. | Voice interaction method and device |
US9183261B2 (en) * | 2012-12-28 | 2015-11-10 | Shutterstock, Inc. | Lexicon based systems and methods for intelligent media search |
CN103077165A (en) * | 2012-12-31 | 2013-05-01 | VIA Technologies, Inc. | Natural language dialogue method and system |
KR20150122510A (en) * | 2014-04-23 | 2015-11-02 | LG Electronics Inc. | Image display device and control method thereof |
GB201501510D0 (en) * | 2015-01-29 | 2015-03-18 | Apical Ltd | System |
US10331312B2 (en) * | 2015-09-08 | 2019-06-25 | Apple Inc. | Intelligent automated assistant in a media environment |
US10606887B2 (en) * | 2016-09-23 | 2020-03-31 | Adobe Inc. | Providing relevant video scenes in response to a video search query |
CN106851407A (en) * | 2017-01-24 | 2017-06-13 | Vivo Mobile Communication Co., Ltd. | Method and terminal for controlling video playback progress |
CN107833574B (en) * | 2017-11-16 | 2021-08-24 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for providing voice service |
CN108055589B (en) * | 2017-12-20 | 2021-04-06 | Juhaokan Technology Co., Ltd. | Intelligent television |
2019
- 2019-05-08 CN CN201980000619.3A patent/CN110741652A/en active Pending
- 2019-05-08 WO PCT/CN2019/086009 patent/WO2019223536A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN110741652A (en) | 2020-01-31 |
Similar Documents
Publication | Title
---|---
US11706489B2 (en) | Display apparatus with intelligent user interface
US12271421B2 (en) | Display apparatus with intelligent user interface
US20190354608A1 (en) | Display apparatus with intelligent user interface
US20190354603A1 (en) | Display apparatus with intelligent user interface
CN114666650B (en) | Identifying and controlling smart devices
US20210365162A1 (en) | Intelligent content queuing from a secondary device
US20190356952A1 (en) | Display apparatus with intelligent user interface
KR101873364B1 (en) | Broadcast signal receiver and method for providing broadcast signal relation information
US9538251B2 (en) | Systems and methods for automatically enabling subtitles based on user activity
US20190356951A1 (en) | Display apparatus with intelligent user interface
CN105578229A (en) | Electronic equipment control method and device
KR102176385B1 (en) | Providing correlated programming information for broadcast media content and streaming media content
WO2019223536A1 (en) | Display apparatus with intelligent user interface
US20150382064A1 (en) | Systems and methods for automatically setting up user preferences for enabling subtitles
CN106325667A (en) | Method and device for quickly locating target object
EP3542246B1 (en) | Streaming content based on skip histories
WO2022017018A1 (en) | Display device, server, and video recommending method
CN104780394A (en) | Video remote-playing habit learning method and system for handheld devices and televisions
Legal Events
Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19806828; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: pct application non-entry in european phase | Ref document number: 19806828; Country of ref document: EP; Kind code of ref document: A1