
WO2017120300A1 - Systems and methods for content delivery - Google Patents

Systems and methods for content delivery

Info

Publication number
WO2017120300A1
Authority
WO
WIPO (PCT)
Prior art keywords
different images
content
user
displayed
different
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2017/012284
Other languages
English (en)
Inventor
Eric Chen
David Coleman
Charles W. K. Gritton
David Karlin
Daniel Simpkins
Seth Sternberg
Peter Wood
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hillcrest Laboratories Inc
Original Assignee
Hillcrest Laboratories Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hillcrest Laboratories Inc filed Critical Hillcrest Laboratories Inc
Publication of WO2017120300A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/482 End-user interface for program selection
    • H04N21/4821 End-user interface for program selection using a grid, e.g. sorted out by channel and broadcast time
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/42206 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor characterized by hardware details
    • H04N21/42222 Additional components integrated in the remote control device, e.g. timer, speaker, sensors for detecting position, direction or movement of the remote control, microphone or battery charging device
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213 Monitoring of end-user related data
    • H04N21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program

Definitions

  • This application describes, among other things, a method and system for dynamically displaying, discovering, scanning and interacting with content across a wide variety of platforms.
  • Originally, the television was tuned to the desired channel by adjusting a tuner knob, and the viewer watched the selected program. Later, remote control devices were introduced that permitted viewers to tune the television from a distance. This addition to the user-television interface created the phenomenon known as "channel surfing," whereby a viewer could rapidly view short segments being broadcast on a number of channels to quickly learn what programs were available at any given time.
  • Some universal remote units provide soft buttons that can be programmed with expert commands. These soft buttons sometimes have accompanying LCD displays to indicate their action. These too have the flaw that they are difficult to use without looking away from the TV to the remote control. Yet another flaw in these remote units is the use of modes in an attempt to reduce the number of buttons.
  • In moded universal remote units, a special button exists to select whether the remote should communicate with the TV, DVD player, cable set-top box, VCR, etc. This causes many usability issues, including sending commands to the wrong device and forcing the user to look at the remote to make sure that it is in the right mode, and it does not provide any simplification of the integration of multiple devices.
  • Even the most advanced of these universal remote units provide only some integration, by allowing the user to program sequences of commands to multiple devices into the remote. This is such a difficult task that many users hire professional installers to program their universal remote units.
  • Also of interest are remote devices usable to interact with such frameworks, as well as other applications, systems and methods for these remote devices for interacting with such frameworks.
  • Various different types of remote devices can be used with such frameworks, including, for example, trackballs, "mouse"-type pointing devices, light pens, and 3D pointing devices with scroll wheels.
  • The term "3D pointing" is used in this specification to refer to the ability of an input device to move in three (or more) dimensions in the air in front of, e.g., a display screen, and the corresponding ability of the user interface to translate those motions directly into user interface commands, e.g., movement of a cursor on the display screen.
  • The transfer of data between the 3D pointing device and another device may be performed wirelessly or via a wire connecting the 3D pointing device to the other device.
  • "3D pointing" differs from, e.g., conventional computer mouse pointing techniques which use a surface, e.g., a desk surface or mousepad, as a proxy surface from which relative movement of the mouse is translated into cursor movement on the computer display screen.
  • An example of a 3D pointing device can be found in U.S. Patent Application No. 11/119,663, the disclosure of which is incorporated here by reference.
  • Systems and methods according to the present invention describe dynamically discovering and displaying content, represented by a plurality of different images, on a graphical user interface. Based on the interaction of the user, whether an explicit interaction or no interaction at all, content is updated and displayed to the user for further manipulation.
  • One such method comprises: displaying the plurality of different images on the graphical user interface; receiving an input from the user; determining that the user has an interest in one of the plurality of different images; dynamically updating the displayed content on the graphical user interface to include the one of the plurality of different images and changing a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of the plurality of different images; selecting another one of the plurality of different images; and displaying the content represented by the another one of the plurality of different images.
  • Also described is a method for dynamically displaying content to a user on a graphical user interface displayed on a device, wherein the content is represented by a plurality of different images, comprising: displaying the plurality of different images on the graphical user interface; dynamically updating the displayed plurality of different images automatically every several seconds to replace the displayed plurality of different images with a new displayed plurality of different images; receiving input via at least one sensor in a 3D pointing device held by the user, associated with movement of a cursor on the graphical user interface over the plurality of different images, wherein movement of the 3D pointing device corresponds with movement of the cursor to randomly access any portion of the graphical user interface displayed on the device; determining, based at least in part on a current position of the cursor, that the user has an interest in one of the plurality of different images; and dynamically updating the displayed content on the graphical user interface to include the one of the plurality of different images and changing a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of the plurality of different images.
  • Also described is a system for dynamically displaying content to a user, comprising: a 3D pointing device; a device configured to display a graphical user interface; and a processor associated with the device and configured to receive inputs for dynamically displaying the content, wherein the content is represented by a plurality of different images, the processor configured to: display the plurality of different images on the graphical user interface; dynamically update the displayed plurality of different images automatically every several seconds to replace the displayed plurality of different images with a new displayed plurality of different images (where all of the images need not necessarily change simultaneously); receive input via at least one sensor in the 3D pointing device held by the user, associated with movement of a cursor on the graphical user interface over the plurality of different images, wherein movement of the 3D pointing device corresponds with movement of the cursor to randomly access any portion of the graphical user interface of the device; determine, based at least in part on a current position of the cursor, that the user has an interest in one of the plurality of different images; dynamically update the displayed content on the graphical user interface of the device to include the one of the plurality of different images and change a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of the plurality of different images; select another one of the plurality of different images; and display the content represented by the another one of the plurality of different images.
  • FIG. 1 depicts a conventional remote control unit for a media system;
  • FIG. 2 depicts an exemplary media system in which exemplary embodiments can be implemented;
  • FIGS. 3A and 3B show a 3D pointing device according to an exemplary embodiment of the present invention
  • FIG. 4 depicts another exemplary 3D pointing device
  • FIG. 5 illustrates a user employing a 3D pointing device to provide input to a user interface on a television according to an exemplary embodiment of the present invention
  • FIG. 6 depicts an initial user interface displaying a plurality of different images
  • FIGS. 7A and 7B are examples of a Snap or Snapshot visualization of a content item;
  • FIG. 8 depicts another user interface displaying a plurality of different images
  • FIG. 9 depicts another user interface displaying a plurality of different images
  • FIG. 10 depicts a further user interface displaying a plurality of different images
  • FIG. 11 depicts a further user interface displaying a plurality of different images;
  • FIG. 12 depicts a further user interface displaying a plurality of different images in the watch list
  • FIG. 13 depicts a further user interface displaying additional detail regarding one of the plurality of different images
  • FIG. 14 depicts a further user interface displaying a plurality of different images
  • FIG. 15 depicts a method for dynamically displaying and updating content according to one of the embodiments herein;
  • FIG. 16 depicts a method for dynamically displaying and updating content according to another one of the embodiments herein;
  • FIG. 17 depicts a method for dynamically displaying and updating content according to another one of the embodiments herein;
  • FIG. 18 depicts a content delivery system in which exemplary embodiments can be implemented; and
  • FIG. 19 depicts a brief overview of the method.
  • an exemplary aggregated media system 200 in which the present invention can be implemented will first be described with respect to Figure 2. Those skilled in the art will appreciate, however, that the present invention is not restricted to implementation in this type of media system and that more or fewer components can be included therein.
  • an input/output (I/O) bus 210 connects the system components in the media system 200 together.
  • The I/O bus 210 represents any of a number of different mechanisms and techniques for routing signals between the media system components.
  • the I/O bus 210 may include an appropriate number of independent audio "patch" cables that route audio signals, coaxial cables that route video signals, two-wire serial lines or infrared or radio frequency transceivers that route control signals, optical fiber or any other routing mechanisms that route other types of signals.
  • the media system 200 includes a television/monitor 212, a video cassette recorder (VCR) 214, digital video disk (DVD) recorder/playback device 216, audio/video tuner 218 and compact disk player 220 coupled to the I/O bus 210.
  • the VCR 214, DVD 216 and compact disk player 220 may be single disk or single cassette devices, or alternatively may be multiple disk or multiple cassette devices. They may be independent units or integrated together.
  • The media system 200 includes a microphone/speaker system 222, video camera 224 and a wireless I/O control device 226. According to exemplary embodiments, the wireless I/O control device 226 is a 3D pointing device.
  • the wireless I/O control device 226 can communicate with the media system 200 using, e.g., an IR or RF transmitter or transceiver. Alternatively, the I/O control device can be connected to the media system 200 via a wire.
  • the media system 200 also includes a system controller 228.
  • the system controller 228 operates to store and display media system data available from a plurality of media system data sources and to control a wide variety of features associated with each of the system components. As shown in Figure 2, system controller 228 is coupled, either directly or indirectly, to each of the system components, as necessary, through I/O bus 210.
  • system controller 228 is configured with a wireless communication transmitter (or transceiver), which is capable of communicating with the system components via IR signals or RF signals. Regardless of the control medium, the system controller 228 is configured to control the media components of the media system 200 via a graphical user interface described below.
  • media system 200 may be configured to receive media items from various media sources and service providers.
  • media system 200 receives media input from and, optionally, sends information to, any or all of the following sources: cable broadcast 230, satellite broadcast 232 (e.g., via a satellite dish), very high frequency (VHF) or ultra-high frequency (UHF) radio frequency communication of the broadcast television networks 234 (e.g., via an aerial antenna), telephone network 236 and cable modem 238 (or another source of Internet content).
  • the media system 200 may be an entertainment system.
  • The media components and media sources illustrated and described with respect to Figure 2 are purely exemplary, and media system 200 may include more or fewer of both.
  • other types of inputs to the system include AM/FM radio and satellite radio.
  • remote devices and interaction techniques between remote devices and user interfaces in accordance with the present invention can be used in conjunction with other types of systems, for example computer systems including, e.g., a display, a processor and a memory system or with various other systems and applications.
  • remote devices which operate as 3D pointers are of particular interest for the present specification, although the present invention is not limited to systems including 3D pointers.
  • Such devices enable the translation of movement of the device, e.g., linear movement, rotational movement, acceleration or any combination thereof, into commands to a user interface.
  • Remote devices which operate as 3D pointers are examples of motion sensing devices which enable the translation of movement, e.g., pointing or gestures, into commands to a user interface.
  • An exemplary 3D pointing device 300 is depicted in Figures 3A-3B.
  • User movement of the 3D pointing device can be defined, for example, in terms of a combination of x-axis attitude (roll), y-axis elevation (pitch) and/or z-axis heading (yaw) motion of the 3D pointing device 300.
  • the 3D pointing device 300 includes two buttons 302 and 304 as well as a scroll wheel 306, although other physical configurations are possible.
  • 3D pointing device 300 can be held by a user in front of a display 308 and motion of the 3D pointing device 300 will be sensed by sensors inside the device 300 (described below with respect to Figure 3B) and translated by the 3D pointing device 300 into output which is usable to interact with the information displayed on display 308, e.g., to move the cursor 310 on the display 308.
  • For example, rotation of the 3D pointing device 300 about the y-axis can be sensed by the 3D pointing device 300 and translated into an output usable by the system to move cursor 310 along the y2 axis of the display 308.
  • Likewise, rotation of the 3D pointing device 300 about the z-axis can be sensed by the 3D pointing device 300 and translated into an output usable by the system to move cursor 310 along the x2 axis of the display 308.
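  • A minimal sketch of this translation (the axis mapping follows the text above; the gain constant and function shape are assumptions):

        def update_cursor(x, y, yaw_rate, pitch_rate, dt, gain=400.0):
            """Map angular rates (rad/s) sensed in the device to cursor motion in pixels."""
            x += gain * yaw_rate * dt    # z-axis (yaw) rotation -> x2-axis motion
            y += gain * pitch_rate * dt  # y-axis (pitch) rotation -> y2-axis motion
            return x, y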
  • A variety of sensors can be employed, e.g., gyroscopes, angular rotation sensors, accelerometers, magnetometers, etc. It will be appreciated by those skilled in the art that one or more of each or some of these sensors can be employed within device 300. According to one purely illustrative example, two rotational sensors 320 and 322 and one accelerometer 324 can be employed as sensors in 3D pointing device 300 as shown in Figure 3B. Although this example employs inertial sensors, it will be appreciated that other motion sensing devices and systems are not so limited, and examples of other types of sensors are mentioned above.
  • The rotational sensors 320, 322 can be 1-D, 2-D or 3-D sensors.
  • the accelerometer 324 can, for example, be a 3-axis linear accelerometer, although a 2-axis linear accelerometer could be used by assuming that the device is measuring gravity and mathematically computing the remaining third value. Additionally, the accelerometer(s) and rotational sensor(s) could be packaged together into a single sensor package. Other variations of sensors and sensor packages may also be used in conjunction with these examples.
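  • As a sketch of the computation just mentioned (an inference; the text does not spell out the formula): if the device is held still so that the accelerometer measures only gravity of magnitude g, the third axis follows, up to sign, from the other two:

        $a_z = \pm\sqrt{g^2 - a_x^2 - a_y^2}$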
  • a handheld motion sensing device is not limited to the industrial design illustrated in Figures 3A and 3B, but can instead be deployed in any industrial form factor, another example of which is illustrated as Figure 4.
  • The 3D pointing device 400 includes a ring-shaped housing 401, two buttons 402 and 404, as well as a scroll wheel 406 and grip 407, although other exemplary embodiments may include other physical configurations.
  • The region 408 which includes the two buttons 402 and 404 and scroll wheel 406 is referred to herein as the "control area" 408, which is disposed on an outer portion of the ring-shaped housing 401. More details regarding this exemplary handheld motion sensing device can be found in U.S. Patent Application Serial No.
  • The handheld motion sensing device may also include one or more audio sensing devices, e.g., microphone 410.
  • Such motion sensing devices 300, 400 have numerous applications including, for example, usage in the so-called "10-foot" interface between a sofa and a television in the typical living room as shown in Figure 5.
  • As the 3D pointing device 400 moves between different positions, that movement is detected by one or more sensors within 3D pointing device 400 and transmitted to the television 520 (or associated system component, e.g., a set-top box (not shown)). Movement of the 3D pointing device 400 can, for example, be translated into movement of a cursor 540 displayed on the television 520, which is used to interact with a user interface, e.g., the Peak™ Content Delivery Service.
  • the television 520 can also include one or more microphones (two of which 544 and 546 are illustrated in Figure 5).
  • input can be provided to the user interface via gesture input, tremor input, voice input, touch input, stylus input, eye tracking input, facial recognition, and user and/or device context, for example.
  • the input device can be worn by the user.
  • the user interface could be on a television, a computer, a tablet, a cell phone, a device worn by the user, an Augmented Reality or Virtual Reality system, or any other type of computing device or handheld device.
  • If the user interface is on a handheld device or a device worn by the user, for example, the user could provide input by moving the handheld device.
  • the embodiments described herein include, but are not limited to, a content selection input device and content delivery output device which are physically separated from one another.
  • 3D pointing device 300 can be used to interact with the display 308 in a number of ways other than (or in addition to) cursor movement; for example, it can control cursor fading, volume or media transport (play, pause, fast-forward and rewind). For instance, pressing the scroll wheel 306 (the scroll wheel also operating in this case as a switch) could cause the device to switch from one mode to another.
  • In one such mode, pressing the scroll wheel could cause the content to play or pause.
  • In another mode, moving the scroll wheel could allow fast-forwarding or rewinding of the content displayed on the UI.
  • the system can be programmed to recognize gestures, e.g., predetermined movement patterns, to convey commands in addition to cursor movement.
  • other input commands e.g., a zoom-in or zoom-out on a particular region of a display (e.g., actuated by pressing button 302 to zoom-in or button 304 to zoom-out or by using the scroll wheel 306), may also be available to the user.
  • the user may use the scroll wheel on the 3D pointer device in a scrolling mode.
  • the cursor When operating in scrolling mode, the cursor can be displayed in a default representation, e.g., as an arrow on the user interface. While in scroll mode, rotation of the scroll wheel on the 3D pointing device (or other pointing device if a 3D pointer is not used) has the effect of scrolling the content which is currently being viewed by the user vertically, i.e., up and down.
  • The GUI screen (also referred to herein as a "UI view," which terms refer to a currently displayed set of UI objects) seen on television 520 is a home view.
  • the home view displays a plurality of applications 522, e.g., "Photos", “Music”, “Recorded”, “Guide”, “Live TV”, “On Demand”, and “Settings”, which are selectable by the user by way of interaction with the user interface via the 3D pointing device 400.
  • Such user interactions can include, for example, pointing, scrolling, clicking or various combinations thereof.
  • For exemplary pointing, scrolling and clicking interactions which can be used in conjunction with exemplary embodiments of the present invention, the interested reader is directed to U.S.
  • Although Figure 5 illustrates various icons for accessing content, the method for accessing content as described below could be implemented by selecting any icon or by logging into a system to display the initial view.
  • Other forms of input as discussed above could also be used to display a certain UI view, e.g., gestures, voice recognition, etc.
  • user interfaces may use, at least in part, zooming techniques for moving between user interface views.
  • the next "highest" user interface view could be reached by actuating an object on the Ul view which is one zoom level higher than the currently displayed Ul view.
  • zooming and/or panning could be implemented by moving the scroll wheel 306.
  • The zooming transition effect can be performed by progressive scaling and displaying of at least some of the UI objects displayed on the current UI view to provide a visual impression of movement of those UI objects away from an observer.
  • User interfaces may zoom in in response to user interaction with the user interface, which will, likewise, result in the progressive scaling and display of UI objects that provide the visual impression of movement toward an observer. More information relating to zoomable user interfaces can be found in U.S. Patent Application Serial No. 10/768,432, filed on January 30, 2004, entitled "A Control Framework with a Zoomable Graphical User Interface for Organizing, Selecting and Launching Media Items," and U.S. Patent Application Serial No. 09/829,263, filed on April 9, 2001, entitled "Interactive Content Guide for Television Programming," the disclosures of which are incorporated here by reference.
  • Movement within the user interface between different user interface views is not limited to zooming.
  • Other non-zooming techniques can be used to transition between user interface views.
  • panning can be performed by progressive translation and display of at least some of the user interface objects which are currently displayed in a user interface view. This provides the visual impression of lateral movement of those user interface objects to an observer.
  • Peak™ as a verb: Peak™ can be defined as a semantic merge of the noun "peak" meaning "mountain top" and the homophone verb "peek" meaning to see.
  • Peak™ as a noun: Peak™ can be defined as the view of a collection of content from a particular semantic vantage point. For example, if the semantic vantage point (Peak™) is "1980's Drama Movies," peaking that content will lead to a stream of cover art and related metadata organized across a viewing screen in a pleasing way.
  • The content category could also be music-related, in which case the art would be music album covers or something else related to items in that category.
  • An example usage of Peak™ is "I could go on an ad for a movie (e.g., "Captivating!" - USA Today) so I peaked it."
  • Another example is "I was in the mood for a suspense movie so I peaked for one." Peaking is different from searching because searching helps you find content, but Peaking helps find the content for you.
  • Peak™ is a content discovery application and service for multiple platforms including smart TVs, PCs, mobile phones and the like. Peak™ allows the user to discover new content of various types including video, audio, and entertainment destinations (e.g., restaurants, theaters). Peak™'s user interface shows images of content on the screen, which vary over time. The images remain on the screen for several seconds and then disappear unless the user interacts with them. There is no static grid that splits the screen; rather, the grid is dynamic, where the size and shape of new images that appear vary over time. The content that populates the image rectangles comes from a database that is either local or online. Each image can show, for example, the title of the content and a rating. When the cursor is hovering over an image, that title remains still and does not disappear like the others.
  • Figure 6 illustrates an initial explore screen 600 displaying content represented by a plurality of different images 602.
  • The initial screen 600 can display a default semantic Peak™, such as "all," or the user can set the initial screen to display content based on the user's personal preferences and/or settings, the user's viewed content history, or potentially all history of other users of the system.
  • Alternatively, the images 602 could be random images. If the user shows no interest in any of the images 602, then the images 602 are replaced with new and different images representing different content, automatically displayed to the user in another UI view. The images 602 can all be replaced with new and different images at the same time.
  • Alternatively, each of the images 602 can be replaced at a different interval, so that one image at a time is replaced with a new image and content. In this way, the viewer has more time to contemplate the content on the screen.
  • This cycle of new UI views with new content and images continues until the user indicates an interest in any of the images or content.
  • The new UI views can be cycled to update every few or several seconds automatically.
  • Alternatively, the timing of the updates to new UI views could be determined based on a user's preferences and/or settings, or learned by the system based on the user's past browsing history/usage. Further, the user could intentionally pick a different semantic Peak™. In any event, once a semantic Peak™ is selected, it is remembered in the user's personal list.
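  • A minimal sketch of this cycling behavior (the names and the interval are illustrative assumptions, not from the patent):

        import random
        import time

        def cycle_tiles(tiles, fetch_new, interval_s=5.0, user_interacted=lambda: False):
            """Replace one displayed tile at a time, on a timer, until the user
            interacts; fetch_new pulls new content from the local/online database."""
            while not user_interacted():
                time.sleep(interval_s)
                idx = random.randrange(len(tiles))   # stagger: one tile per interval
                tiles[idx] = fetch_new()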
  • The system may remember the personal history for each user, such as what the user liked and did not like, what content metadata they looked at more, what options the user preferred, etc. The system may accomplish this by determining relevant content, the metadata for that content, and presentation rules, etc. In addition, the system allows for the creation and publishing of a semantic Peak™ to others. As shown in Fig. 6, each of the plurality of images 602 could be displayed indicating the title 604 of the content, as well as indicating a rating 606 of the content. In this example of Fig. 6, each of the plurality of images 602 could display the movie's cover art or a scene or character in the program.
  • each of the plurality of images 602 may be displayed in a (generally) rectangular or square shape, where each of the shapes can be of different sizes and at different screen locations.
  • Although rectangular and square shapes are illustrated in Fig. 6, the shapes could be any shape or combination of shapes, e.g., a teardrop and/or a circle, or a pyramid shape.
  • With a dynamic screen layout, one can display oversized visuals, like having a single movie poster take up one-third or even one-half of the screen 600. Since the images and/or content can cycle out and be replaced by images of varying sizes, the user will be getting the benefit of a large and visually arresting display.
  • Although Fig. 6 and the other content views discussed below represent programs such as movies or shows, the content could also be advertisements, documents, music, photos, games, recipes, books, travel, online dating, restaurants, shopping, theatre tickets, local events, social media, or job listings, for example.
  • the content represented on the display could be related to more than one type of content.
  • content and images representing movies, theatre tickets, and advertisements could be displayed simultaneously.
  • Snapshots or Snaps: For each display mechanism, such as Peak™, and content type, a concise visual display of a subset of relevant metadata needs to be constructed. The template shape is then constructed (possibly dynamically as required) whenever that particular content item needs to be shown on the screen.
  • An example of a Snap for a restaurant is shown in Figure 7A.
  • An example of a Snap for a movie is shown in Figure 7B.
  • the Snap allows a coherent presentation of the most relevant information about a particular content item so that the user can instinctively browse across several relevant information facets in parallel. Note that while the particular embodiments in this patent involve using a single Snap construct per content item display, the designer could easily decide to selectively use one of many Snap constructs or even use one that morphs over time in a single display so as to display additional relevant information.
  • As the user watches, both the layout and the content displayed on the screen autonomously change.
  • Each particular collection/layout displayed on the UI stays stable for a few to several seconds before changing and updating to new content and images.
  • The user or the system could choose to generally select the older pieces of content to change out at a given instant in time, or could just randomly choose the content to change in order to make the display more visually interesting.
  • For example, the view presented in Fig. 6 could change to the new content presented in Fig. 8.
  • In Fig. 8, the content has dynamically updated from that of Fig. 6 to present new content in a new UI view 800.
  • Note that each of the plurality of images 810 has changed to a different size, a different location, and a different image.
  • The transition from the UI screen shown in Fig. 6 to that of Fig. 8 could happen gradually, i.e., by individually cycling through (replacing) individual images.
  • In Fig. 8, the user has moved the 3D pointing device to hover over image 802 representing "Jurassic World." Once the user has hovered over one of the plurality of different images, the image could relay additional features to the user.
  • For example, a border could be displayed around the image 802; this border could be of different colors, for example.
  • Alternatively, the image 802 could be enlarged relative to its original displayed size to visually convey to the user that the user is hovering over the image for possible further input.
  • Other features could be employed to make the image 802 stand out visually as one in which the user may be interested.
  • Because the user has random access to any part of the UI, corners or edges of the image 802, for example, could be linked to additional features.
  • When hovered over, the image 802 will then display icons 806, 808.
  • One such feature is an icon 806 (a Peak™ icon, for example) on the corner of the content image 802 that, if selected, will navigate the view to another UI screen that displays additional content represented by a plurality of images which are similar to the image 802 originally selected.
  • Alternatively, just hovering over the image 802 could indicate interest, whereupon the images 810 are automatically replaced with content related to image 802.
  • Although Fig. 8 displays an image 802 with two icons 806, 808, the image 802 could display additional or different icons for selection by the user for additional features. Alternatively, the image 802 could display no icons, and instead various input, such as selecting a button on the input device, could provide access to and selection of additional features.
  • In Fig. 9, the UI display 900 has been automatically updated to present new content represented by a plurality of different images 908.
  • Image 902 of Fig. 9 is highlighted to indicate the user's interest in "The Hunger Games: Mockingjay - Part 1," for example.
  • In this UI view, a cursor 904 is displayed, where movement of the 3D pointing device corresponds to movement of the cursor 904.
  • Here, the cursor 904 is moved over the icon 906 represented by a flag. The user can select the icon 906 by pressing a button on the 3D pointing device, for example, to add this program to the user's short list or watch list.
  • In Fig. 10, a new UI view 1000 is displayed. As shown in Fig. 10, because the user has highlighted image 802 "Jurassic World" by hovering over the image 802, for example, in Fig. 8, the highlighted image 802 "Jurassic World" remains on the screen 1000, while the remaining images 810 of Fig. 8, for example, have updated to reflect new and different images 1004 in Fig. 10. Also, in this UI view 1000, a cursor 1006 is displayed, where movement of the 3D pointing device corresponds to movement of the cursor 1006.
  • In Fig. 10, the cursor 1006 is moved over the icon 1008 represented by the Peak™ symbol. The user can select the icon 1008 by pressing a button on the 3D pointing device, for example, to add this program to the user's Peak™ list.
  • Fig. 11 illustrates another display view 1100 of the Peak™ screen, where an icon 1102 (a Peak™ icon, for example) is displayed differently than that of Fig. 10, for example.
  • Upon selection of the icon 1102, the content is dynamically updated to display content represented by different images 1104, wherein the content is similar to "The Man from U.N.C.L.E.," for example.
  • The related content 1104 could be presented based on similar movies, in this example.
  • Another feature illustrated in Fig. 11 is the number of items that are listed in the short list or watch list, as designated by the number next to the icon 1106 represented by a flag.
  • Selection of the icon 1106 represented by a flag in Fig. 11, for example, results in a new UI view 1200 of Fig. 12, where the icon 1106 may remain in the new UI view 1200.
  • In this example, eight images 1202 representing programs are displayed in the short list or watch list view 1200.
  • Although the images 1202 are displayed in a grid format where each image is displayed in equal size, the images 1202 can be displayed in different sizes or shapes and at different locations as discussed above. Further, selection of an image could be accomplished by moving the cursor 1204 over any of the images 1202 and pressing a button, for example, on the 3D pointing device.
  • Fig. 13 displays additional details regarding the content, such as the title, date, user rating, parental rating, content time length, and a brief description of the content.
  • This UI view 1300 could include an icon 1302 for playing the content.
  • Although Fig. 13 displays certain details, as well as an icon 1302 for playing the content, the information displayed could list different or additional details and/or additional or different icons.
  • A user is not required to access this UI view 1300 to play the desired content. Instead, the user could select the content represented by an image in any of the UI views of Figs. 6 and 8-12 by pressing a button on the 3D pointing device, for example.
  • The dynamic content method could also be implemented using a grid display where a user manipulates up, down, left, and right buttons, or via other input, to move from one part of the grid to another.
  • Alternatively, this dynamic content method could be implemented in a text-based system. For example, a user enters a text-based search in a search engine on the Internet; however, the terms the user is entering do not succinctly match the terms related to the desired search results. In this example, the user enters "classic movies," and the search results display a variety of different types of classic movies, such as those from the 1970's, those from the 1940's, film noir movies, and black and white movies.
  • Each content item (e.g., a movie or a restaurant) is described by a set of mixed data attributes.
  • The initial step (Part 1) of an algorithm to implement the method is to take the mixed data attributes and produce normalized metric data facets, i.e., facet generation. Those facets are then used in Part 2 (content item selection) to drive the actual selection process, e.g., of the content images which are displayed on the UI, and cycled through, as described previously.
  • This section will describe how that process works for each common data type, based on one-to-one mappings.
  • The method could further include many-to-one mapping. There are likely some useful facets that are inherently formed from several attributes at once; there is nothing inherent in this architecture that prohibits that or makes that unwieldy.
  • For metric data, the desired output range is 0 to 1, so the perspective mapping function in this case rescales the raw attribute value into that range. For ordinal data, the data is ordered by rank. Assuming that the rank has a meaning in the sense that the third element is more similar to the first element than to the tenth element, and that there is always at least one element that is first in the list (has value 1 indicating that it is first), the mapping is again fairly straightforward: in the perspective mapping proposed here, the facet value of 1 is assigned to the top rank item and 0 to the bottom rank item, as sketched below.
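  • A hedged reconstruction of these two mappings, following directly from the stated endpoints (a 0-to-1 output for metric data; rank 1 mapping to 1 and the bottom rank to 0 for ordinal data) and assuming the mappings are linear:

        $f_{\mathrm{metric}}(x) = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$
        $f_{\mathrm{ordinal}}(r) = \frac{N - r}{N - 1}, \quad r = 1, \ldots, N$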
  • For categorical data, the perspective mappings required are more complex. Furthermore, multiple perspectives might be meaningful for any given attribute. The following are some examples of perspective mappings based on categorical data.
  • One category could be the year a program was released.
  • The year a film was made looks like a metric (and technically is one) but, from a cognitive perspective, it behaves more like a category.
  • The real attribute of interest is whether or not a film is "modern" or "classic" or "early," for example.
  • The perspective mappings for those attributes could be as follows (see the sketch below):
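  • One minimal sketch of such a mapping, with hypothetical year boundaries chosen purely for illustration:

        def year_facets(year):
            """Hypothetical perspective mapping of release year onto the
            early / classic / modern facets; the boundary years are invented."""
            return {
                "early":   1.0 if year < 1950 else 0.0,
                "classic": 1.0 if 1950 <= year < 1990 else 0.0,
                "modern":  1.0 if year >= 1990 else 0.0,
            }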
  • Another category could be the genre of the program.
  • Film genre is clearly categorical and is not metric at all. In fact, by itself and with no preconceptions, it is virtually impossible to say whether a "horror" movie is more similar to a "romance” than to a "comedy.” Since the system is seeking to understand the similarity (or distance) between two movies, for example, the system could have some mechanism or prism through which the system can determine similarity. For example, if the system knew the user's goal was emotional diversion, then fantasy might be quite similar to drama to achieve that goal. If the user's goal was to be inspired, then action might be quite similar to drama. The user's goal gives the system the necessary perspective needed to judge similarity of one genre to another.
  • Machine Learning could be used to iterate these mappings.
  • An Expert or Oracle of Delphi approach could be used to determine an original mapping.
  • Another possibility is to leverage text analysis of reviews or movie advertisements or plot descriptions to determine a good mapping.
  • An example of such a mapping expressed as a Python dict (dictionary) is set forth below:
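  • A hypothetical stand-in for that dict, with invented values: keyed by the user's goal, it scores how well each genre serves that goal, echoing the fantasy/drama and action/drama examples above:

        # Invented illustrative values; keyed by user goal as discussed above.
        GENRE_SIMILARITY_BY_GOAL = {
            "emotional_diversion": {"fantasy": 0.9, "drama": 0.85, "comedy": 0.6, "horror": 0.3},
            "inspiration":         {"action": 0.8, "drama": 0.75, "romance": 0.4, "horror": 0.1},
        }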
  • Another category could be actors/actresses. However, just based on their identification number or name in the system, one has virtually no ability to determine how similar two actors are to each other. So, for actors too, perspective mapping is needed.
  • One possible perspective could be the distribution of film genres in which the actors/actresses have starred.
  • The primary similarity metric would then be the sum over genres k of H_i(k) * H_j(k). In other words, it is the dot product of the two histograms and is sometimes called the cosine metric.
  • Many other perspectives are possible such as one that determines whether an actor is dominant in a particular genre (e.g., Sylvester Stallone with action) or whether an actor is a broad character actor with no particular preference (e.g., Meryl Streep).
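  • A minimal sketch of the histogram-based actor similarity just described (the helper function and data are illustrative):

        # Each actor is a histogram over film genres; similarity is the dot
        # product of the two histograms (the cosine metric above, when normalized).
        def actor_similarity(h_i, h_j):
            return sum(h_i[k] * h_j.get(k, 0.0) for k in h_i)

        stallone = {"action": 0.7, "drama": 0.2, "comedy": 0.1}   # invented data
        streep   = {"drama": 0.5, "comedy": 0.3, "romance": 0.2}  # invented data
        print(actor_similarity(stallone, streep))                 # -> 0.13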
  • N_f : the number of facets that are to be used for computing similarity between content items.
  • w_{v_i} : a weight vector for view v_i giving the relative importance of each facet in comparing similarity metrics between two content items.
  • N_v : the number of views used to select the next set of content items to display.
  • N_P : the number of elements in G_p.
  • To compute the similarity between two content items, the system must first decide how to compute the similarity of a single facet from two content items. Since every facet is normalized to a range of [0, 1], the straightforward method would be to use the Manhattan or Euclidean measure. In equation form, this would mean either the absolute difference |f_i(k) - f_j(k)| (Manhattan) or the squared difference (f_i(k) - f_j(k))^2 (Euclidean) for facet k of content items i and j.
  • The weight vector w_k allows the algorithm and system to adjust the relative importance of each of the facets to the overall computation, as in the sketch below.
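  • A minimal sketch combining the per-facet Manhattan measure with the weight vector w_k (converting the weighted distance into a similarity score is an assumption of this sketch):

        # f1, f2: facet vectors normalized to [0, 1]; w: per-facet weights w_k.
        def weighted_similarity(f1, f2, w):
            dist = sum(wk * abs(a - b) for wk, a, b in zip(w, f1, f2))
            return 1.0 - dist / sum(w)   # 1 = identical, 0 = maximally different

        print(weighted_similarity([0.9, 0.2, 0.5], [0.7, 0.2, 0.1], [2.0, 1.0, 1.0]))  # -> 0.8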
  • One way to implement this method is to process the user's reaction to the particular Peak™ session as the base for deciding which new content to suggest.
  • User preferences and/or group behavior could also be learned over time.
  • Here, the base is the content that has been presented to the user in this session, indicated as the G_p content.
  • From that base, the system can find content that is most appropriate to show based on the user's behavior to date.
  • A goal of Peak™ and the dynamic content delivery method is to avoid requiring the user to do anything, i.e., to provide passive content navigation where the system generates the inputs rather than putting the cognitive load on the user to continuously refine, e.g., directed queries.
  • The system can learn when an item is displayed but no action by the user is taken.
  • When the user does interact, the system can learn a bit more.
  • Generally, the system assumes that more interaction with the item indicates more interest by the user (except for the case when the user explicitly downgrades the film). For example, when the user acts on one film (image or item of interest) in the layout, the other images or items of interest in the layout are deemed to be of less interest.
  • An example approach for assessing user interaction could be to assign α values as shown in Table 3.
  • For example, a user may be presented with a UI view such as that of Figure 6.
  • If the "type of interaction" by the user is "no selection of any item on screen," then some or all of the plurality of images 602 are automatically cycled out and replaced with a new set of images 1400 as set forth in Figure 14.
  • The plurality of images 1402 are of different shapes, at different locations, and are different images than those of the plurality of images 602 of Figure 6.
  • Each View is expressed in terms of a weight vector, for example.
  • The View weight vector sets the relative importance of each of the facets in determining similarity between candidate content and the content history. In essence, then, the View weight vector can be thought of as a basis vector in the overall facet space.
  • the system may constantly recalculate the rankings and resort the items to be presented, based on interaction or lack thereof. If size is used to represent popularity, the a setting may not affect the size of the image.
  • the a setting of a non- interacted item that is being replaced would determine what item (or set of items) gets displayed next, but the item's popularity relative to the general populace is orthogonal to that - your top item may or may not be popular with the crowd. For example, noninteraction with an item uses the a setting to re-score all items with metadata overlap and re-rank everything. That new ranking can be used to determine size when the new item cycles in.
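  • A hedged sketch of that re-scoring step; the α values stand in for Table 3 and are invented for illustration:

        # Invented interaction weights standing in for Table 3.
        ALPHA = {"no_interaction": -0.1, "hover": 0.3, "flag": 0.7, "select": 1.0}

        def rescore(candidates, shown_item, interaction, overlap):
            """Re-score every candidate by its metadata overlap (in [0, 1]) with
            the item the user did (or did not) interact with, then re-rank."""
            a = ALPHA[interaction]
            for c in candidates:
                c["score"] += a * overlap(c, shown_item)
            candidates.sort(key=lambda c: c["score"], reverse=True)
            return candidates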
  • Other visualizations could do things very differently in how they choose to display size.
  • Display size determines the α setting, but the α setting may or may not determine the display size of future items.
  • the content provider could determine the size of an image based on whether the content provider desires to promote certain content. Hence, if certain content is promoted, that image for the content item may be of a larger size than other images displayed to grab the viewer's attention.
  • the group of the similarity metrics for a given View Vj is defined as follows:
  • The content in the layout group L is then shown/presented to the user. Based on the user's reactions, the next iteration of interesting content is prepared for the user as the next iteration of layout group L.
  • The content set G_p is updated to include this set of content for this session of Peak™ operation.
  • Figure 15 illustrates a method of one of the exemplary embodiments of the invention.
  • At step S100, a UI view is presented displaying content associated with a plurality of different images.
  • At step S102, it is determined whether input has been received. If, at step S102, it is determined that input is received where a user indicates an interest in an image by interacting with that image, then at step S106 that image of interest remains on the screen, the remaining images are replaced by images that represent content related to the image in which the user has expressed an interest, and the result is displayed in a new UI view at S100. Alternatively, if at step S102 it is determined that no input has been received or there has been no interaction by the user, then the UI view is updated at S104 to replace all of the images with new images.
  • At step S108, it is determined whether a selection has been made by the user with respect to one of the images. If an image has been selected by the user, then the content is displayed at step S110. If no image has been selected by the user at S108, then all of the images are updated again at S104 and displayed in a new UI view at S100.
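  • A minimal sketch of this loop (function names are placeholders, not from the patent):

        def run_ui(display, images, related_to, next_batch, get_input):
            while True:
                display(images)                      # S100: present the UI view
                event = get_input()                  # S102/S108: poll user input
                if event and event.kind == "select":
                    return event.image               # S110: display chosen content
                if event and event.kind == "interest":
                    keep = event.image               # S106: keep image of interest,
                    images = [keep] + related_to(keep)[:len(images) - 1]  # add related
                else:
                    images = next_batch(len(images)) # S104: no input, replace all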
  • Another method is shown in Fig. 16, which sets forth a method for dynamically displaying content to a user on a graphical user interface, wherein the content is represented by a plurality of different images, comprising: displaying the plurality of different images on the graphical user interface; receiving an input from the user; determining that the user has an interest in one of the plurality of different images; dynamically updating the displayed content on the graphical user interface to include the one of the plurality of different images and changing a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of the plurality of different images; selecting another one of the plurality of different images; and displaying the content represented by the another one of the plurality of different images.
  • Another method is shown in Fig. 17, which sets forth a method for dynamically displaying content to a user on a graphical user interface displayed on a device, wherein the content is represented by a plurality of different images, comprising: displaying the plurality of different images on the graphical user interface; dynamically updating the displayed plurality of different images automatically every several seconds to replace the displayed plurality of different images with a new displayed plurality of different images; receiving input via at least one sensor in a 3D pointing device held by the user, associated with movement of a cursor on the graphical user interface over the plurality of different images, wherein movement of the 3D pointing device corresponds with movement of the cursor to randomly access any portion of the graphical user interface displayed on the device; determining, based at least in part on a current position of the cursor, that the user has an interest in one of the plurality of different images; and dynamically updating the displayed content on the graphical user interface to include the one of the plurality of different images and changing a remainder of the plurality of different images to display an additional plurality of different images, wherein the additional plurality of different images are related to the one of the plurality of different images.
  • The Peak™ Content Delivery Service can be implemented using one or more processors 1800 that are connected to one or more input devices 1802 and one or more output devices 1804, as shown in Figure 18.
  • Processor(s) 1800 are thus specially programmed to present Peak™ Content Delivery Service user interface screens which change over time as described above, both randomly and in response to a user's random access (pointing) cursor movements and/or button selections of content elements, flags and/or Peak™ icons as described above.
  • Input device 1802 could thus be (or include) a 3D pointing device, and output device 1804 could thus be (or include) a television, AR/VR device, mobile phone or the like.
  • Processor(s) 1800 could reside within the television itself, or in a set-top box or another device connected to the television, like a game console or the user's smart phone. If used in a tablet, the processor(s) 1800, input device(s) 1802, and output device(s) 1804 could all reside within a single housing and be portable.
  • Elements of the Peak™ Content Delivery Service could be pushed to the local system 1800, 1802, and 1804 from a remotely located server 1806 via, e.g., the Internet or a cable or satellite media connection.
  • The Peak™ Content Delivery Service is explicitly imagined as a multi-user, multi-platform system with the ability to learn relationships and interests across a collection of users and content.
  • The Peak™ Content Delivery Service could of course be implemented for just a single user, with the learning then restricted to that particular user.
  • Figure 19 illustrates a brief overview of the Peak™ Content Delivery Service method 1900.
  • The metadata from various Content Sources 1902, as well as Global Context 1904 such as weather, drives the system's User Interface operation shown in the right-hand side of the diagram.
  • The loop starts with an auto-generated query 1916 of the metadata 1906 for the first set of content to show the user.
  • The appropriate content is then selected and ordered 1910 for presentation (a group that here is referred to as the Layout group).
  • The machine learning 1912 determines the views and perspectives 1914 presented to the user.
  • For each content item, a Snap 1908 is formed for display to the user.
  • The last step involves the user either deliberately requesting more information on particular displayed content or simply waiting for something more interesting to be displayed.
  • In either case, a new auto-generated query 1916 is formed and the loop begins again. The result is a mostly passive, guided journey through content of potential interest to the user - a journey that is both rewarding and fun.
  • a determination or indication of a user's interest in a particular image (or other discoverable content) can be based on one or more inputs or actions including, but not limited to, cursor position, cursor movement, remote control input (e.g., button press, button release, scroll wheel movement, OFN detections), voice input, eye-tracking, selection, hovering, focus placement on the image, etc.
  • A user's lack of interest may also be determined or indicated by a lack of one or more of these inputs.
  • "Interest" can be valued at different levels based upon a number or quality of inputs made by the user with respect to a given image (or other discoverable content).
  • General context can enhance the significance of user action or inaction. For example, if a particular content item takes up half the screen and the user still does not indicate interest, that indicates a higher level of disinterest than if the content item only took up 1/10th of the screen. Conversely, if a user deliberately selects a content item even though its visual representation (Snap) is the smallest on the screen, that indicates a higher level of interest than if the content item took up half the screen. Similarly, a user's pattern of interest compared with general popularity can indicate a different level of interest. If the user selects an item on screen that is among the least popular shown, that is more significant than if the user picks one that is the most popular shown.
  • Systems and methods for processing data according to exemplary embodiments of the present invention can be performed by one or more processors executing sequences of instructions contained in a memory device. Such instructions may be read into the memory device from other computer-readable mediums such as secondary data storage device(s). Execution of the sequences of instructions contained in the memory device causes the processor to operate, for example, as described above. In alternative embodiments, hard-wire circuitry may be used in place of or in combination with software instructions to implement the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Systems and methods according to the present invention relate to dynamically discovering and displaying content, represented by a plurality of different images, on a graphical user interface. Based on the user's interaction, whether an explicit interaction or no interaction at all, content is updated and displayed to the user for further manipulation.
PCT/US2017/012284 2016-01-05 2017-01-05 Systems and methods for content delivery Ceased WO2017120300A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662274989P 2016-01-05 2016-01-05
US62/274,989 2016-01-05

Publications (1)

Publication Number Publication Date
WO2017120300A1 true WO2017120300A1 (fr) 2017-07-13

Family

ID=59274433

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/012284 Ceased WO2017120300A1 (fr) 2016-01-05 2017-01-05 Systems and methods for content delivery

Country Status (1)

Country Link
WO (1) WO2017120300A1 (fr)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6175362B1 (en) * 1997-07-21 2001-01-16 Samsung Electronics Co., Ltd. TV graphical user interface providing selection among various lists of TV channels
US6295646B1 (en) * 1998-09-30 2001-09-25 Intel Corporation Method and apparatus for displaying video data and corresponding entertainment data for multiple entertainment selection sources
US7839385B2 (en) * 2005-02-14 2010-11-23 Hillcrest Laboratories, Inc. Methods and systems for enhancing television applications using 3D pointing
US8935630B2 (en) * 2005-05-04 2015-01-13 Hillcrest Laboratories, Inc. Methods and systems for scrolling and pointing in user interfaces
US20110219395A1 (en) * 2006-08-29 2011-09-08 Hillcrest Laboratories, Inc. Pointing Capability and Associated User Interface Elements for Television User Interfaces
US8261209B2 (en) * 2007-08-06 2012-09-04 Apple Inc. Updating content display based on cursor position
US7797713B2 (en) * 2007-09-05 2010-09-14 Sony Corporation GUI with dynamic thumbnail grid navigation for internet TV
US8760400B2 (en) * 2007-09-07 2014-06-24 Apple Inc. Gui applications for use with 3D remote controller
US20120086711A1 (en) * 2010-10-12 2012-04-12 Samsung Electronics Co., Ltd. Method of displaying content list using 3d gui and 3d display apparatus applied to the same
US20130097542A1 (en) * 2011-04-21 2013-04-18 Panasonic Corporation Categorizing apparatus and categorizing method
US20140337749A1 (en) * 2013-05-10 2014-11-13 Samsung Electronics Co., Ltd. Display apparatus and graphic user interface screen providing method thereof
WO2014194148A2 (fr) * 2013-05-29 2014-12-04 Weijie Zhang Systèmes et procédés impliquant une interaction utilisateur à base de gestes, une interface utilisateur et/ou d'autres éléments
US20150074552A1 (en) * 2013-09-10 2015-03-12 Opentv, Inc System and method of displaying content and related social media data

Similar Documents

Publication Publication Date Title
US8359545B2 (en) Fast and smooth scrolling of user interfaces operating on thin clients
US7386806B2 (en) Scaling and layout methods and systems for handling one-to-many objects
US8521587B2 (en) Systems and methods for placing advertisements
US8935630B2 (en) Methods and systems for scrolling and pointing in user interfaces
US20060262116A1 (en) Global navigation objects in user interfaces
US8850478B2 (en) Multimedia systems, methods and applications
US9576033B2 (en) System, method and user interface for content search
JP5662569B2 (ja) Systems and methods for excluding content from multiple domain searches
US20070067798A1 (en) Hover-buttons for user interfaces
US9459783B2 (en) Zooming and panning widget for internet browsers
US20120271711A1 (en) Overlay device, system and method
EP2948827B1 (fr) Method and system for content discovery
WO2017120300A1 (fr) Systems and methods for content delivery
US20170180670A1 (en) Systems and methods for touch screens associated with a display

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17736304

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17736304

Country of ref document: EP

Kind code of ref document: A1