WO2010078650A1 - Identification, recommendation and delivery of relevant media content - Google Patents
- Publication number
- WO2010078650A1 (PCT application PCT/CA2010/000010)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- media
- metadata
- emotive
- media segment
- segment
- Prior art date
- Legal status
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/16—Analogue secrecy systems; Analogue subscription systems
- H04N7/173—Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
- H04N7/17309—Transmission or handling of upstream communications
- H04N7/17318—Direct or substantially direct transmission and handling of requests
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/036—Insert-editing
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
- G11B27/105—Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
- H04N21/25866—Management of end-user data
- H04N21/25891—Management of end-user data being end-user preferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/266—Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
- H04N21/2668—Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/432—Content retrieval operation from a local storage medium, e.g. hard-disk
- H04N21/4325—Content retrieval operation from a local storage medium, e.g. hard-disk by playing back content from the storage medium
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/475—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/658—Transmission by the client directed to the server
- H04N21/6582—Data stored in the client, e.g. viewing habits, hardware capabilities, credit card number
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/812—Monomedia components thereof involving advertisement data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
Definitions
- the invention relates generally to the identification, recommendation and delivery of relevant media content based on metadata creation and analysis. More specifically, the invention relates to segmenting, annotating, and distributing media content based on emotive metadata.
- aspects of the present invention address the aforementioned need by providing methods and systems that enable the presentation of relevant advertising and other media content to users experiencing media at moments of heightened emotional engagement.
- for advertisements inserted within media to be less intrusive, they not only need to be relevant, but they also need to be presented when the audience is in a suitable emotive state.
- These moments serve as a catalyst for significant gains in efficiency and effectiveness for both advertisers and publishers alike.
- a computer-implemented method of generating metadata in association with a media segment of a media content item comprising the steps of: providing at least a portion of the media content item to a user for playback on a user interface of a media device; receiving input from the user defining a starting point of the media segment and an ending point of the media segment; and recording, in association with the media content item, metadata corresponding to the starting and ending points within the media content item and metadata relating to the user.
- a computer- implemented method of providing additional media content to a user based on emotive metadata associated with a media segment being played on a user interface of a media device comprising the steps of: a) searching metadata associated with a collection of media segments to obtain a list of matching media segments, wherein each media segment in the list of matching media segments has associated therewith metadata comprising at least one emotive metadata element common with the emotive metadata of the media segment being played, b) selecting a matching media segment from the list of matching media segments, provided that the selected matching media segment has not been previously selected during a current user session; and c) providing said selected media segment for playback upon completion of said media segment being played.
- a computer- implemented method of inserting an additional media segment within a media content item based on a relationship between emotive metadata associated with the media content item and emotive metadata associated with the additional media segment comprising the steps of: a) identifying an emotive media segment within the media content item, the emotive media segment comprising a portion of the media content item, wherein the emotive media segment has associated therewith emotive metadata; b) searching metadata associated with a collection of media segments for a list of matching media segments, wherein each media segment in the list of matching media segments has associated therewith metadata comprising at least one emotive metadata element common with the emotive metadata of the emotive media segment, c) selecting a matching media segment from the list of matching media segments; and d) inserting the selected matching media segment into the media content item after the emotive media segment.
- a computer- implemented method of inserting an additional media segment within a media content item based on a relationship between emotive metadata associated with the media content item and emotive metadata associated with the additional media segment comprising the steps of: a) identifying an emotive media segment within the media content item, the emotive media segment comprising a portion of the media content item, wherein the emotive media segment has associated therewith emotive metadata having at least one emotive metadata element common with the emotive metadata of the additional media segment, and b) inserting the additional media segment into the media content item after the emotive media segment.
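The insertion methods claimed above can be sketched as follows. This is an illustrative Python sketch, not the application's implementation; the function name and the `"id"`/`"emotive"` field names are hypothetical, and segments are assumed to be represented as dictionaries carrying their emotive metadata elements.

```python
def insert_after_emotive_segment(item_segments, additional_segment):
    """Insert an additional media segment (e.g. an advertisement) into a
    media content item's segment sequence, immediately after the first
    segment sharing at least one emotive metadata element with it."""
    additional_emotive = set(additional_segment["emotive"])
    result = []
    inserted = False
    for segment in item_segments:
        result.append(segment)
        # Steps (a)/(b): an emotive segment matches if its emotive metadata
        # has at least one element in common with the additional segment.
        if not inserted and additional_emotive & set(segment["emotive"]):
            result.append(additional_segment)  # insert after the match
            inserted = True
    return result
```

In this sketch the additional segment is inserted at most once, after the first emotively matching segment; other selection rules over the list of matches are equally possible.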
- Figure 1 shows a flow chart illustrating a method of generating metadata associated with a media segment according to a first embodiment of the invention.
- Figure 2 shows a diagram illustrating a media device for use in playback of media content.
- Figure 3 shows a diagram illustrating an emotive metadata schema.
- Figure 4 shows a flow chart illustrating a method of generating metadata associated with a media segment according to another embodiment of the invention.
- Figure 5 is a diagram illustrating a system including a media device for media playback and a server communicating with the media device.
- Figure 6 is a flow chart illustrating a method of providing additional relevant media segments to a user viewing a first media segment.
- Figure 7 is an example of a user interface for presenting and recommending video media content according to an embodiment of the invention.
- Figure 8 is an example illustrating the use of a user interface to facilitate a transaction based on transaction metadata.
- Figure 9 is a flow chart illustrating a method of inserting an emotively relevant media segment into a media content item based on emotive metadata.
- the systems described herein are directed to methods and systems of providing relevant media content to users.
- embodiments of the present invention are disclosed herein. However, the disclosed embodiments are merely exemplary, and it should be understood that the invention may be embodied in many various and alternative forms.
- the terms, “comprises” and “comprising” are to be construed as being inclusive and open ended, and not exclusive. Specifically, when used in this specification including claims, the terms, “comprises” and “comprising” and variations thereof mean the specified features, steps or components are included. These terms are not to be interpreted to exclude the presence of other features, steps or components.
- the terms “about” and “approximately”, when used in conjunction with ranges of dimensions of particles, compositions of mixtures or other physical properties or characteristics, are meant to cover slight variations that may exist in the upper and lower limits of the ranges of dimensions so as to not exclude embodiments where on average most of the dimensions are satisfied but where statistically dimensions may exist outside this region. It is not the intention to exclude embodiments such as these from the present invention.
- the coordinating conjunction "and/or” is meant to be a selection between a logical disjunction and a logical conjunction of the adjacent words, phrases, or clauses.
- the phrase “X and/or Y” is meant to be interpreted as “one or both of X and Y” wherein X and Y are any word, phrase, or clause.
- the term “media content item” means any form of digital media that may be segmented, including, but not limited to, video, audio, animations, slideshows, electronic text, and combinations thereof.
- a media content item may be stored in any format, or copies of the same or similar media file may be stored in multiple formats.
- Non-limiting examples of media content items include movies, music videos, television show episodes, video presentations, home and amateur videos, video advertisements, songs, audio advertisements, audiobooks, electronic books, and any portion thereof.
- Metadata refers to data associated with a media content item that provides information about or related to the content item.
- metadata may include a plurality of metadata elements. Each of the plurality of metadata elements may provide unique descriptive information relating to a content item. An association may be created to link a media content item to its related metadata. Therefore, metadata or metadata files may be provided in a database and searched in order to identify and locate a desired media content item. Accordingly, metadata associated with a media content item may facilitate the electronic delivery of the media content item in a digital format.
- the term "emotive metadata” means metadata related to human behavioral traits.
- Emotive metadata may relate to emotions displayed or conveyed by the media content and/or may relate to emotions experienced by a media user when playing or viewing the media content.
- Metadata associated with a media content item may comprise emotive data and other forms of metadata, such as, but not limited to, descriptive metadata, metadata related to the facilitation of a transaction associated with a product or service, and metadata related to the location of a media content item.
- "Emotive metadata” may also comprise metadata relating to the user including the user's interests, interest drivers, personal values, personality attributes, and any combination thereof.
- the term "media segment” means any portion of a media content item for which associated metadata exists.
- media segments include a video clip within a video file such as a scene from a movie, an audio clip within an audio file such as a chorus of a song, or a portion of an electronic book such as a chapter or a paragraph.
- Starting and ending points of a media segment may be defined by a user or pre-defined and referenced by a user.
- a media segment may comprise any portion of a media content item, including the entirety of the media content item.
- the term "user” means any person viewing, reading or otherwise experiencing the playback of a media content item or media segment, for any purpose.
- a user may be an end consumer experiencing the playback of media for purposes of enjoyment, education, or other forms of media consumption.
- a user may be a person experiencing the playback of media for commercial purposes, such as, but not limited to, analysis of media content for the creation of metadata for use in searching and/or insertion and/or development of relevant advertising.
- a media segment forms a portion of a media content item such as a digital video or a song.
- the media segment may be defined by the user by specifying a starting and ending point within the media content item, or alternatively may be identified by selecting a pre-defined segment of a media content item.
- the resulting media segment which relates to a "moment" of relevance or interest to a user, is thus defined by metadata describing the starting and ending points of the media segment, and metadata associated with the user selecting the media segment.
- Defining media segments annotated with user-relevant metadata provides a number of advantages for more semantic and efficient media searching and retrieval, media content generation, and media content delivery.
- the metadata associated with media segments as defined herein enables dramatically improved media content granularity for suggesting and providing relevant media to users.
- this enables advertisers to target users with additional media segments having relevant and contextual messaging, and to include transaction metadata in such media segments that enables users to engage immediately in transactions related to products and services.
- An exemplary yet non-limiting use of media segments defined according to this embodiment of the invention includes the insertion of additional user- relevant media segments, such as advertisements, based on correlations between the media segment metadata and metadata relating to the additional media segments.
- An additional non-limiting application of media segments defined according to the present embodiment includes the delivery of additional media content, such as additional media segments having metadata correlated with metadata associated with a media segment being played by a user.
- Figure 1 provides a flow chart illustrating a method of generating metadata relating to a media segment according to the embodiment discussed above, from the perspective of a user.
- step 100 at least a portion of a media content item is played by a user on a media device.
- the portion may include the entirety of the media content item. The user thus views, plays, reads or otherwise experiences the media content item.
- the media device may be any device or system capable of displaying, playing and/or presenting media content.
- media devices include handheld audio and/or video players such as an iPod®, televisions, smart phones, tablets, kiosks, and displays connected to computers.
- media may be locally provided by the media device and/or local systems operatively connected to the media device, and/or media may be remotely provided to the media device either via a server or through a local system or application.
- FIG. 2 illustrates a non-limiting example of a media device that may be used by the user according to embodiments of the invention.
- the media device, shown generally at 150, comprises a media presentation module 160, an input module 170, and a memory and processor module 180.
- Modules 160-180 may be housed in a single device 190 such as a handheld media device, or may form a system comprising separate devices such as a computer system comprising a computer processor, a monitor and a keyboard and mouse.
- the user, while playing the media content item, experiences content of interest or relevancy occurring over a media segment, determines starting and ending points of the media segment in step 105, and provides input defining the starting and ending points in step 110.
- starting and ending points may take on many forms according to different aspects of the invention.
- Non-limiting examples of starting and ending points include time stamps relative to a reference time point associated with the media content item, scene changes, cue points, references to dialogue, and frames within a video. Accordingly, starting and ending points may be determined by the user by referencing such exemplary points.
- the user may reference pre-determined sections of a media content item, such as chapters in an electronic book or scenes in a movie. In such cases, a single reference by a user to a pre-determined section provides the required information to determine the starting and ending points of the chosen media segment.
- the starting and ending points provided or inferred based on the user input form metadata that is associated with the media segment in step 115. This metadata is recorded in association with the media segment.
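The recording of steps 110–115 can be sketched as follows. This is an illustrative Python sketch; the class and function names are hypothetical, and time stamps in seconds relative to the start of the media content item are assumed as the form of the starting and ending points.

```python
from dataclasses import dataclass, field

@dataclass
class MediaSegment:
    media_id: str   # identifies the parent media content item
    start: float    # starting point, seconds from the item's reference time
    end: float      # ending point
    metadata: dict = field(default_factory=dict)  # user-related metadata

def record_segment(media_id, start, end, user_metadata=None):
    """Record start/end metadata in association with a media content item."""
    if end <= start:
        raise ValueError("ending point must follow the starting point")
    return MediaSegment(media_id, start, end, dict(user_metadata or {}))
```

For example, `record_segment("movie-001", 120.0, 185.5, {"user_id": "u42"})` records a segment of roughly one minute together with metadata relating to the user who defined it.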
- in step 120, which is preferably included in the method, the user provides additional metadata relating to the media segment.
- metadata provided by the user or associated with the user is recorded in association with the media segment.
- the metadata provided by the user describing the media segment may comprise multiple types of user-specific metadata.
- the metadata may comprise a user identifier, such as, but not limited to, a user ID, a user name, a user icon, a user group, and other forms of user identification known to those skilled in the art. Accordingly, a media segment as so defined provides an indication that the media segment was of relevance to a particular user or user group. Such information may be used to provide search results and/or recommendations to related users or user groups.
- a second user may obtain and play the media segment by searching for media segments having such metadata.
- this form of media segment generation allows users to search for, or obtain recommendations to, specific and relevant segments within media content items that may serve as launching points for further media content discovery.
- Metadata relating to the user may additionally or alternatively include metadata describing the user's interests, interest drivers, personal values, personality attributes, and any combination thereof.
- the user provides metadata relating to a media segment that may comprise metadata describing how or why the media segment is of interest to the user.
- metadata comprises emotive metadata that is indicative of one or more emotional responses experienced by a user when playing or viewing the media segment, and/or metadata that relates to emotions exhibited by or within the media itself, such as emotions of a character in a movie.
- Such emotive metadata preferably relates to a vocabulary enabling specific emotive terms to be applied as metadata.
- the emotive metadata vocabulary may comprise metadata elements relating to aspirations, interest drivers, personal values, and emotions.
- a specific emotive metadata vocabulary is provided in Example 1 below.
- Figure 3 illustrates a specific embodiment of an emotive metadata schema in which emotive metadata may comprise elements relating to self-interest drivers, personality, and emotion.
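A minimal sketch of such a schema follows, grouping emotive metadata into the three element families named above. This is illustrative only: the vocabulary values shown are assumptions, not the vocabulary of Example 1 or the contents of Figure 3.

```python
# Three element families from the schema description; values are illustrative.
EMOTIVE_SCHEMA = {
    "self_interest_drivers": {"achievement", "belonging", "security"},
    "personality": {"adventurous", "analytical", "nurturing"},
    "emotion": {"joy", "sadness", "surprise", "anticipation"},
}

def validate_emotive_tags(tags):
    """Keep only tags drawn from the controlled emotive vocabulary."""
    vocabulary = set().union(*EMOTIVE_SCHEMA.values())
    return [tag for tag in tags if tag in vocabulary]
```

A controlled vocabulary of this kind lets segments tagged by different users be matched on common emotive elements rather than on free-text descriptions.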
- a user may be a media owner or other party that generates metadata relating to media segments for commercial purposes, such as to aid in the discovery of media segments within selected media content items that are available for purchase.
- a user may generate metadata relating to a media segment that is an advertisement or relates to a product or service to aid in the discovery of such a media segment by end users.
- it may be desirable for a media owner to engage, for example through employment or other incentives, users to generate emotive metadata identifying media segments to increase the value and discovery potential of the media content item.
- the present invention further contemplates a method of generating metadata by providing media content to a user and receiving input from the user for the purpose of generating user-related metadata defining a media segment.
- Figure 4 provides a flow chart illustrating a method of generating and recording user-specific metadata relating to a media segment selected by a user.
- step 200 at least a portion of a media content item is provided to a media device for playback.
- the media content may reside on a local resource such as a local server or database, or may reside on a remote database.
- a preferred embodiment is shown in Figure 5, where a media device 150 connects through an internal or external network 300 to communicate with a server 310 running an application program interface (API) 320.
- the API 320 serves the media device 150 with media content items residing in media content database 340.
- step 205 input is received from the user defining the starting and ending points of the media segment.
- this input is received by API 320.
- step 210 metadata provided by the user relating to the media segment is received.
- step 215 metadata corresponding to the starting and ending positions of the media segment is associated with the media segment and stored either in a metadata database (shown at 330 in Figure 5) or appended to the media content item residing in media content database 340.
- step 220 the user-specific metadata relating to the media segment is also stored either in metadata database 330 or appended to the media content item residing in media content database 340.
- Figure 5 provides a specific and non-limiting embodiment showing a system for practicing an embodiment of the present invention, and those skilled in the art will readily appreciate that other variants of the system architecture may be possible; these variants are encompassed within the present invention. While the preceding embodiments describe methods for generating metadata relating to media segments based on user input, metadata relating to user-selected media segments may additionally or alternatively be obtained through automated metadata extraction methods. For example, automated processing using techniques such as text, audio and video analysis may be employed to extract metadata from media segments.
- media segments may be defined by a user without appending user metadata, yet still represent a media segment of interest to the user.
- a media segment initially having no metadata associated therewith may be appended with metadata, such as emotive metadata, using automated metadata extraction methods as discussed below.
- the media segment of interest to the user may be selected by the user, and the metadata corresponding to the media segment selected by the user may be automatically determined and associated with the selected media segment.
- a media device may be employed by a user to define a media segment based on sampling of a portion of a media content item being played on a separate media presentation device, and cross-referenced with the full media content item based on automated metadata analysis.
- a media segment may be defined based on a user viewing a video and using a media device capable of recording an audio portion of the video being viewed.
- the user may define a media segment within the audio portion recorded by the media device, and subsequently the media segment may be analyzed by performing metadata extraction of the recorded audio portion of the media segment and correlating the metadata with the full media content item for creation of the full media segment with or without associated metadata.
- metadata associated with a media segment as defined herein involves the incorporation of metadata from multiple sources, such as through an iterative learning process in which correlations are established between the audiovisual properties of film clips, information extracted from collateral texts and user-provided metadata relating to a media segment.
- the media content item is video content relating to films, which enables the extraction and discovery of semantically richer metadata due to conventions followed by film-makers that aid in the construction of emotive metadata vocabularies. For example, conventions are followed in the way that videos tell stories, albeit to varying degrees, including the use of editing techniques and formulaic plot structures. Knowledge of such 'film grammar' can be exploited in the development of multimedia information extraction technologies, particularly with regard to metadata vocabularies, for video content relating to film.
- a user may share media segments with other users, and additional metadata may be collected relating to the identity, type, and interests of users with whom the media segments are shared. For example, metadata describing the type of a user (for example, partners, parents, close friends, and work colleagues) with whom a media segment is shared may be recorded in association with a media segment. Additionally, a user's viewing patterns may be monitored to gather metadata relating to preferred sequences of media segments, and this metadata may be recorded to aid in the suggestion and recommendation of additional media segments.
- additional metadata relating to a media segment may be obtained by searching online resources, for example, websites such as imdb.com to obtain descriptive metadata such as the Title, Year, Genre, Director, Actors, etc.
- Metadata may be obtained by matching popular results for a video clip from YouTube® against a media segment, and leveraging comments made about the media segment by YouTube® users for metadata annotation.
- lists of famous film quotes could be used to obtain additional metadata by matching them against time-coded subtitles.
- Metadata may be extracted from audio sources by using a speech-to-text converter and subsequently employing an algorithm such as a natural language processing algorithm for metadata extraction. For example, retrieving media segments featuring a car becomes a matter of looking for 'car' in the screenplay or audio transcript.
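That retrieval step can be sketched as a keyword search over a time-coded transcript. This is an illustrative sketch; the transcript format of `(start, end, text)` tuples is an assumption about how speech-to-text output might be represented, not a format specified by the application.

```python
def segments_mentioning(transcript, keyword):
    """Return (start, end) time ranges whose transcribed text mentions the
    keyword. transcript: list of (start, end, text) tuples, e.g. produced
    by a speech-to-text converter. Matching is a simple case-insensitive
    substring test; a real system would use NLP for extraction."""
    keyword = keyword.lower()
    return [(start, end) for start, end, text in transcript
            if keyword in text.lower()]
```

For example, searching a transcript for 'car' yields the time ranges of candidate media segments featuring a car, which can then be annotated or retrieved.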
- emotive metadata associated with a media segment being played by a user is used to provide to the user at least one additional media segment having related emotive metadata.
- This embodiment enables a new manner in which a user may experience media, where media discovery is provided by emotive metadata linking separate media segments that need not belong to a common parent media content item.
- Prior art media selection and recommendation methods have focused on providing and recommending media content based on generic descriptive or content-related metadata.
- the present embodiment provides a completely different media playback and browsing experience for a user by preserving the emotional state or mood associated with the viewing of a media segment.
- the present and inventive method departs from the “more is better” teachings of the prior art, and instead discloses a focused “less is more” approach to emotive-metadata-based recommendation and delivery.
- Such embodiments provide a potentially rich media playback experience in which an emotion, mood, or feeling associated with the playback of a first media segment may be at least in part preserved during the playback of an additional and automatically selected media segment. All prior art media selection and recommendation methods known to the inventors fail to deliver such an experience.
- a preferred method is illustrated generally by the flow chart shown in Figure 6.
- step 400 emotive metadata associated with a media segment being played by a user on a media device is used to search a collection of media segments for a list of media segments, where each matching media segment in the list has at least one common emotive metadata element.
- This step provides a list of emotively relevant media segments that may be subsequently provided to the user for playback.
- step 405 a media segment is selected from the list of matching media segments.
- the selected matching media segment is provided to the user in step 410 for playback.
- one or more additional media segments from the list of matching media segments may be provided to the user after providing a first matching media segment.
- a matching media segment for recommendation, delivery and/or playback may be selected using a wide range of criteria.
- a matching media segment is selected from the list of matching media segments based on a ranking of relevant emotive metadata matches. For example, a media segment having emotive metadata with the largest number of matches with the emotive metadata associated with the media segment being viewed by the user may be selected. It is to be understood that many other selection rules known to those skilled in the art are within the scope of the present invention.
- a matching media segment may be selected randomly from the list of media segments.
- a matching media segment recommended, delivered and/or played according to the present embodiment is a media segment that has not already been played during a current user session. This provision avoids repeated playback of media already recommended and viewed.
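For purposes of illustration only, the search, ranking, and session-exclusion provisions of steps 400 through 410 may be sketched as follows. The segment records, tag names, and session structure shown are assumptions for illustration, not part of the disclosed method.

```python
# Illustrative sketch of steps 400-410: search a collection for segments
# sharing emotive metadata elements with the segment being played, rank
# candidates by the number of shared elements, and skip any segment
# already played during the current user session.

def select_matching_segment(current_tags, collection, played_ids):
    current = set(current_tags)
    candidates = []
    for seg_id, tags in collection.items():
        if seg_id in played_ids:
            continue  # avoid repeated playback within the session
        overlap = current & set(tags)
        if overlap:
            candidates.append((len(overlap), seg_id))
    if not candidates:
        return None
    # rank by the number of common emotive metadata elements
    candidates.sort(reverse=True)
    return candidates[0][1]

collection = {
    "clip-a": ["joyful", "nostalgic"],
    "clip-b": ["joyful", "nostalgic", "uplifting"],
    "clip-c": ["outraged"],
}
print(select_matching_segment(["joyful", "nostalgic", "uplifting"],
                              collection, played_ids={"clip-a"}))
# → clip-b
```

Random selection, as described above, would simply replace the ranking step with a random choice among the candidates.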
- the collection of media segments may be located on a local or remote resource or a combination of the two.
- media segments may be stored and/or cached in a local memory or database resource such as flash memory or a hard drive.
- at least a portion of the collection of media segments is stored on a server.
- the collection of media segments may be stored in media content database 340, either with associated metadata appended to the media segments, or with associated metadata stored in a separate (either physically or logically) database such as metadata database 330.
- the searching, selection and delivery of additional media segments to the user may be carried out by an application programming interface 320 operating on a server 310, where the server is connected to the media device through a local or remote network.
- server 310 is connected to media device 150 through the internet.
- Media segments may be provided to media device 150 via one of many methods known to those skilled in the art, including, but not limited to, streaming a media segment and uploading a media segment to the media device for playback.
- additional network and/or system elements may be used to facilitate the method, including the use of additional devices and services such as web servers and firewalls.
- the system may further interface with proprietary and/or third party streaming media applications.
- the system may be configured as or interfaced with a social media application, wherein users may access, discover and share media segments.
- the emotive metadata associated with the media segment being played and/or the collection of media segments preferably includes emotive metadata elements that pertain to an emotive response of a user when viewing a given media segment.
- the emotive metadata may relate to emotions exhibited by or within the media itself, such as emotions of a character in a movie.
- the emotive metadata elements may belong to an emotive metadata vocabulary.
- media segments are played back to the user on a media device that comprises a user interface.
- the user interface enables the user to view, read or otherwise playback media segments according to the method disclosed above.
- a non-limiting example of a user interface for displaying video media content according to one embodiment of the invention is shown in Figures 7 and 8.
- the user interface 500 comprises a display area where video media content is played back to a user.
- An optional progress bar is included at 510 that may further include controls relating to playback and volume.
- the user interface preferably includes metadata information 520 relating to the identity of the media segment being played, and additional controls 530 relating to additional media playback.
- the user interface preferably includes a control for requesting the delivery and playback of additional media segments having related emotive metadata according to the embodiment of the invention.
- controls 530 may be included for replaying a selected media segment, sharing a preferred media segment with another user, and adding user-specific metadata related to the clip (in accordance with previously recited embodiments of the present invention).
- the user interface may be provided on a wide range of media presentation devices, for example, a touchscreen device and a display device having an input interface such as a mouse and keyboard.
- metadata associated with a matching media segment additionally comprises transaction metadata that facilitates a transaction related to a product or service, where the product or service is preferably associated with content within the matching media segment.
- transaction metadata may be utilized in many different methods to facilitate a transaction.
- Non-limiting examples of transaction-based metadata include metadata linking the user to a transaction via a link such as a web page, email address, or IP address.
- Specific non-limiting transaction based metadata includes metadata that can be displayed in the user interface as a link (for example, to a web page or telephone number where a user may purchase a product or service and/or initiate a call to inquire for more information regarding a product or service).
- the transaction metadata may alternatively be rendered as a button or other selectable item on the user interface.
- the matching media segment is a portion of a media content item (such as a movie or song) and the transaction metadata facilitates a transaction by which a user may playback, rent or purchase at least a portion of the media content item (as illustrated in Figure 8).
- the matching media segment is a portion of a media content item
- the transaction metadata associated with the media segment allows a user to playback, rent or purchase a media segment that is sequentially related to the media segment currently being viewed, thereby facilitating continued viewing of a media content item.
- a method for inserting an additional media segment within a media content item, where the media segment has associated emotive metadata that is related to that of a media segment identified within the media content item. This embodiment enables the insertion of emotively relevant content into a media content item.
- an emotive media segment having emotive metadata is identified within a media content item.
- the emotive media segment may be identified based on emotive metadata created using, for example, previously disclosed embodiments of the present invention.
- the emotive media segment may be obtained by searching within metadata associated with the media content item for a specific emotive metadata element, thereby obtaining an emotive media segment having associated emotive metadata including the specific emotive metadata element.
- a collection of media segments is searched to obtain a list of matching media segments having metadata in which at least one emotive metadata element is common with the metadata relating to the emotive media segment identified in the previous step.
- a matching media segment is selected from the list for insertion.
- the matching media segment may be selected based on a wide range of criteria.
- the matching media segment may be selected based on a ranking of the matches, random selection, or, in the case of the collection of media segments being advertisements, contractual agreements pertaining to the insertion frequency and/or priority.
- the selected matching media segment is inserted into the media content item after the identified emotive media segment.
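By way of illustration only, the insertion step may be sketched as a splice of the selected matching segment into the media content item's segment timeline immediately after the identified emotive media segment. The timeline representation and identifiers shown are assumptions for illustration.

```python
# Illustrative sketch: insert a matching segment (e.g. an emotively
# relevant advertisement) into a media content item's segment timeline
# directly after the identified emotive media segment.

def insert_after(timeline, emotive_segment_id, new_segment_id):
    """Return a new timeline with new_segment_id placed after emotive_segment_id."""
    result = []
    for seg in timeline:
        result.append(seg)
        if seg == emotive_segment_id:
            result.append(new_segment_id)
    return result

timeline = ["scene-1", "scene-2-emotive", "scene-3"]
print(insert_after(timeline, "scene-2-emotive", "ad-joyful"))
# → ['scene-1', 'scene-2-emotive', 'ad-joyful', 'scene-3']
```

As noted below, such a splice may be performed prior to playback or dynamically during playback of the media content item.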
- an additional media segment having associated emotive metadata may be pre-selected for insertion within a media content item. Accordingly, an emotive media segment after which the additional media segment is to be inserted may be identified by searching for a pre-defined media segment within the media content item that has associated metadata including at least one emotive metadata element common with the emotive metadata associated with the additional media segment.
- the insertion of the additional media segment may be made prior to playback of the media content item by a user, or dynamically during playback of the media content item by the user.
- the emotive metadata associated with the emotive media segment is user-specific emotive metadata, enabling the insertion of additional media segments targeting the emotive aspects of the user directly.
- the media content item and/or additional media segments may be stored within a local or remote resource, or a combination of the two.
- media segments may be stored and/or cached in a local memory or database resource such as flash memory or a hard drive.
- such a server-based embodiment may be illustrated again with reference to Figure 5.
- the media content item, and its associated media segments may be stored in media content database 340, either with associated metadata appended to the media segments, or with associated metadata stored in a separate (either physically or logically) database such as metadata database 330. Additional media segments for insertion may be stored in a common or separate resource.
- the identification of an emotive media segment and/or searching, selection and delivery of additional media segments for insertion may be carried out by an application programming interface 320 operating on a server 310, where the server is connected to the media device through a local or remote network.
- server 310 is connected to media device 150 through the internet.
- Media content may be provided to media device 150 via one of many methods known to those skilled in the art, including, but not limited to, streaming a media segment and uploading a media segment to the media device for playback.
- additional network and/or system elements may be used to facilitate the method, including the use of additional devices and services such as web servers and firewalls.
- the system may further interface with proprietary and/or third party streaming media applications.
- the media segment inserted within the content item is an advertisement.
- the insertion of an emotively relevant advertising media segment after a related media segment in a media content item assists in preserving the emotion, or the moment, experienced by a viewer, and thereby delivers a far less disruptive user viewing experience.
- the insertion of emotively relevant media segments, particularly those with additional transaction metadata facilitates improved advertising conversion rates.
- the emotive metadata either in the media segment identified within the media content item, or in the additional media segment to be inserted is indicative of an anticipated heightened emotional state of a user.
- the metadata associated with the additional media segment to be inserted preferably additionally comprises transaction metadata that facilitates a transaction related to a product or service, where the product or service is preferably associated with content within the inserted media segment.
- transaction-based metadata may be utilized in many different methods to facilitate a transaction.
- non-limiting examples of transaction-based metadata include metadata linking the user to a transaction with a link such as a web page, email address, telephone number or IP address.
- Specific non-limiting transaction based metadata includes metadata that can be displayed in a user interface as a link to a web page where a user may purchase a product or service and/or inquire for more information regarding a product or service.
- the transaction metadata may alternatively be rendered as a button or other selectable item on the user interface.
- relevancy tags are grouped into three categories: value profiles, interest drivers, and emotions.
- value profile tags include:
- Examples of positive emotion tags include:
- examples of negative emotion tags include: • Outraged
Abstract
The present invention provides systems and methods for identifying, recommending and delivering media content based on the use of media segments having associated metadata. In one aspect, the invention provides a method of generating user-related metadata relating to user-selected media segments, where the metadata is preferably emotive in nature. Methods are also provided for presenting additional media segments to users, and inserting additional media segments with media content, based on related emotive metadata.
Description
IDENTIFICATION, RECOMMENDATION AND DELIVERY OF RELEVANT
MEDIA CONTENT
CROSS-REFERENCE TO RELATED APPLICATIONS This application claims priority to U.S. Provisional Application No.
61/204,426, filed on January 7th, 2009, the entire contents of which are incorporated herein by reference.
FIELD OF THE INVENTION The invention relates generally to the identification, recommendation and delivery of relevant media content based on metadata creation and analysis. More specifically, the invention relates to segmenting, annotating, and distributing media content based on emotive metadata.
BACKGROUND OF THE INVENTION
Media viewers are faced with an increasingly complex challenge of sorting through hundreds of thousands of media programs, clips and other forms of content to determine which are relevant and of interest. Today, viewers are limited to inefficient and non-semantic searching methods to sort through media content. The result is a very long tail of quality produced content that many viewers will never be able to uncover.
In the field of advertising, brand creation and conveyance of brand messages are becoming increasingly difficult as thousands of channels, video
streaming websites, and other forms of media distribution vie for viewers' attention. Audiences are easily distracted and have adapted to the disruption caused by current forms of advertising by skipping advertising segments or tuning out. For example, users find current forms of in-stream video advertising to be intrusive and irrelevant. Because of this, research shows that they often avoid ad-supported video all together, or in some cases, go to great lengths to obtain content illegally.
Accordingly, there is a need for methods and systems that provide media content, including advertising content, to users in a manner that is both relevant and non-disruptive.
SUMMARY OF THE INVENTION
Aspects of the present invention address the aforementioned need by providing methods and systems that enable the presentation of relevant advertising and other media content to users experiencing media at moments of heightened emotional engagement. For advertisements inserted within media to be less intrusive, not only do they need to be relevant, but they need to be presented when the audience is in a suitable emotive state. These moments serve as a catalyst for significant gains in efficiency and effectiveness for both advertisers and publishers alike.
Accordingly, in a first aspect of the invention, there is provided a computer-implemented method of generating metadata in association with a media segment of a media content item, the method comprising the steps of:
providing at least a portion of the media content item to a user for playback on a user interface of a media device; receiving input from the user defining a starting point of the media segment and an ending point of the media segment; and recording, in association with the media content item, metadata corresponding to the starting and ending points within the media content item and metadata relating to the user.
In another aspect of the invention, there is provided a computer- implemented method of providing additional media content to a user based on emotive metadata associated with a media segment being played on a user interface of a media device, the method comprising the steps of: a) searching metadata associated with a collection of media segments to obtain a list of matching media segments, wherein each media segment in the list of matching media segments has associated therewith metadata comprising at least one emotive metadata element common with the emotive metadata of the media segment being played, b) selecting a matching media segment from the list of matching media segments, provided that the selected matching media segment has not been previously selected during a current user session; and c) providing said selected media segment for playback upon completion of said media segment being played.
In another aspect of the invention, there is provided a computer- implemented method of inserting an additional media segment within a media
content item based on a relationship between emotive metadata associated with the media content item and emotive metadata associated with the additional media segment, the method comprising the steps of: a) identifying an emotive media segment within the media content item, the emotive media segment comprising a portion of the media content item, wherein the emotive media segment has associated therewith emotive metadata; b) searching metadata associated with a collection of media segments for a list of matching media segments, wherein each media segment in the list of matching media segments has associated therewith metadata comprising at least one emotive metadata element common with the emotive metadata of the emotive media segment, c) selecting a matching media segment from the list of matching media segments; and d) inserting the selected matching media segment into the media content item after the emotive media segment.
In yet another aspect of the invention, there is provided a computer- implemented method of inserting an additional media segment within a media content item based on a relationship between emotive metadata associated with the media content item and emotive metadata associated with the additional media segment, the method comprising the steps of: a) identifying an emotive media segment within the media content item, the emotive media segment comprising a portion of the media content item, wherein the emotive media segment has associated therewith emotive metadata
having at least one emotive metadata element common with the emotive metadata of the additional media segment, and b) inserting the additional media segment into the media content item after the emotive media segment. A further understanding of the functional and advantageous aspects of the invention can be realized by reference to the following detailed description and drawings.
BRIEF DESCRIPTION OF THE DRAWINGS The embodiments of the present invention are described with reference to the attached figures, wherein:
Figure 1 shows a flow chart illustrating a method of generating metadata associated with a media segment according to a first embodiment of the invention. Figure 2 shows a diagram illustrating a media device for use in playback of media content.
Figure 3 shows a diagram illustrating an emotive metadata schema. Figure 4 shows a flow chart illustrating a method of generating metadata associated with a media segment according to another embodiment of the invention.
Figure 5 is a diagram illustrating a system including a media device for media playback and a server communicating with the media device.
Figure 6 is a flow chart illustrating a method of providing additional
relevant media segments to a user viewing a first media segment.
Figure 7 is an example of a user interface for presenting and recommending video media content according to an embodiment of the invention. Figure 8 is an example illustrating the use of a user interface to facilitate a transaction based on transaction metadata.
Figure 9 is a flow chart illustrating a method of inserting an emotively relevant media segment into a media content item based on emotive metadata.
DETAILED DESCRIPTION OF THE INVENTION
Generally speaking, the systems described herein are directed to methods and systems of providing relevant media content to users. As required, embodiments of the present invention are disclosed herein. However, the disclosed embodiments are merely exemplary, and it should be understood that the invention may be embodied in many various and alternative forms. The
Figures are not to scale and some features may be exaggerated or minimized to show details of particular elements while related elements may have been eliminated to prevent obscuring novel aspects. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present invention. For purposes of teaching and not limitation, the illustrated embodiments are directed to systems and methods of presenting media content, particularly media content having emotive
relevance, to media users.
As used herein, the terms, "comprises" and "comprising" are to be construed as being inclusive and open ended, and not exclusive. Specifically, when used in this specification including claims, the terms, "comprises" and "comprising" and variations thereof mean the specified features, steps or components are included. These terms are not to be interpreted to exclude the presence of other features, steps or components.
- As used herein, the terms "about" and "approximately", when used in conjunction with ranges of dimensions of particles, compositions of mixtures or other physical properties or characteristics, are meant to cover slight variations that may exist in the upper and lower limits of the ranges of dimensions so as to not exclude embodiments where on average most of the dimensions are satisfied but where statistically dimensions may exist outside this region. It is not the intention to exclude embodiments such as these from the present invention. As used herein, the coordinating conjunction "and/or" is meant to be a selection between a logical disjunction and a logical conjunction of the adjacent words, phrases, or clauses. Specifically, the phrase "X and/or Y" is meant to be interpreted as "one or both of X and Y" wherein X and Y are any word, phrase, or clause. As used herein, the term "media content item" means any form of digital media that may be segmented, including, but not limited to, video, audio, animations, slideshows, electronic text, and combinations thereof. A media content item may be stored in any format, or copies of the same or similar media
file may be stored in multiple formats. Non-limiting examples of media content items include movies, music videos, television show episodes, video presentations, home and amateur videos, video advertisements, songs, audio advertisements, audiobooks, electronic books, and any portion thereof. As used herein, the term "metadata" refers to data associated with a media content item that provides information about or related to the content item. In some embodiments, metadata may include a plurality of metadata elements. Each of the plurality of metadata elements may provide unique descriptive information relating to a content item. An association may be created to link a media content item to its related metadata. Therefore, metadata or metadata files may be provided in a database and searched in order to identify and locate a desired media content item. Accordingly, metadata associated with a media content item may facilitate the electronic delivery of the media content item in a digital format. As used herein, the term "emotive metadata" means metadata related to human behavioral traits. Emotive metadata may relate to emotions displayed or conveyed by the media content and/or may relate to emotions experienced by a media user when playing or viewing the media content. Metadata associated with a media content item may comprise emotive data and other forms of metadata, such as, but not limited to, descriptive metadata, metadata related to the facilitation of a transaction associated with a product or service, and metadata related to the location of a media content item. "Emotive metadata" may also comprise metadata relating to the user including the user's interests, interest
drivers, personal values, personality attributes, and any combination thereof.
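The association described above, whereby metadata may be provided in a database and searched in order to identify and locate a desired media content item, may be sketched for purposes of illustration as a simple in-memory index. The class, method names, and sample identifiers are assumptions for illustration, not a prescribed implementation.

```python
# Illustrative sketch: an index mapping metadata elements to the media
# content items they describe, so that a desired item can be located by
# searching metadata rather than the media content itself.

from collections import defaultdict

class MetadataIndex:
    def __init__(self):
        self._by_element = defaultdict(set)

    def associate(self, content_id, elements):
        """Create an association linking a content item to its metadata elements."""
        for element in elements:
            self._by_element[element].add(content_id)

    def search(self, element):
        """Return content items associated with a given metadata element."""
        return sorted(self._by_element.get(element, set()))

index = MetadataIndex()
index.associate("movie-42", ["joyful", "genre:comedy"])
index.associate("song-7", ["joyful"])
print(index.search("joyful"))
# → ['movie-42', 'song-7']
```

Such an index accommodates emotive and non-emotive metadata elements alike, since both are simply searchable associations to a content item.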
- As used herein, the term "media segment" means any portion of a media content item for which associated metadata exists. Non-limiting examples of media segments include a video clip within a video file such as a scene from a movie, an audio clip within an audio file such as a chorus of a song, or a portion of an electronic book such as a chapter or a paragraph. Starting and ending points of a media segment may be defined by a user or pre-defined and referenced by a user. A media segment may comprise any portion of a media content item, including the entirety of the media content item. As used herein, the term "user" means any person viewing, reading or otherwise experiencing the playback of a media content item or media segment, for any purpose. In one non-limiting example, a user may be an end consumer experiencing the playback of media for purposes of enjoyment, education, or other forms of media consumption. In another non-limiting example, a user may be a person experiencing the playback of media for commercial purposes, such as, but not limited to, analysis of media content for the creation of metadata for use in searching and/or insertion and/or development of relevant advertising.
In a first embodiment of the invention, methods and systems are provided for the generation of metadata associated with a media segment of interest or relevance to a user. As defined above, a media segment forms a portion of a media content item such as a digital video or a song. The media segment may be defined by the user by specifying a starting and ending point within the media content item, or alternatively may be identified by selecting a pre-defined
segment of a media content item. The resulting media segment, which relates to a "moment" of relevance or interest to a user, is thus defined by metadata describing the starting and ending points of the media segment, and metadata associated with the user selecting the media segment. These and other scenarios for the selection of a user-relevant media segment are further described below.
Defining media segments annotated with user-relevant metadata provides a number of advantages for more semantic and efficient media searching and retrieval, media content generation, and media content delivery. Most advantageously, the metadata associated with media segments as defined herein enables dramatically improved media content granularity for suggesting and providing relevant media to users. In particular, this enables advertisers to target users with additional media segments having relevant and contextual messaging, and to include transaction metadata in such media segments that enables users to engage immediately in transactions related to products and services.
An exemplary yet non-limiting use of media segments defined according to this embodiment of the invention includes the insertion of additional user- relevant media segments, such as advertisements, based on correlations between the media segment metadata and metadata relating to the additional media segments. An additional non-limiting application of media segments defined according to the present embodiment includes the delivery of additional media content, such as additional media segments having metadata correlated
with metadata associated with a media segment being played by a user. These and other related embodiments are considered in greater detail below.
Figure 1 provides a flow chart illustrating a method of generating metadata relating to a media segment according to the embodiment discussed above, from the perspective of a user. In step 100, at least a portion of a media content item is played by a user on a media device. The portion may include the entirety of the media content item. The user thus views, plays, reads or otherwise experiences the media content item.
- The media device may be any device or system capable of displaying, playing and/or presenting media content. Non-limiting examples of media devices include handheld audio and/or video players such as an iPod®, televisions, smart phones, tablets, kiosks, and a display connected to a computer. In non-limiting embodiments, media may be locally provided by the media device and/or local systems operatively connected to the media device, and/or media may be remotely provided to the media device either via a server or through a local system or application.
Figure 2 illustrates a non-limiting example of a media device that may be used by the user according to embodiments of the invention. The media device, shown generally at 150, comprises a media presentation module 160, an input module 170, and a memory and processor module 180. Modules 160-180 may be housed in a single device 190 such as a handheld media device, or may form a system comprising separate devices such as a computer system comprising a computer processor, a monitor and a keyboard and mouse.
- Referring again to Figure 1, the user, while playing the media content item, experiences content of interest or relevancy occurring over a media segment, determines starting and ending points of the media segment in step 105, and provides input defining the starting and ending points in step 110. The starting and ending points may take on many forms according to different aspects of the invention. Non-limiting examples of starting and ending points include time stamps relative to a reference time point associated with the media content item, scene changes, cue points, references to dialogue, and frames within a video. Accordingly, starting and ending points may be determined by the user by referencing such exemplary points. Alternatively, the user may reference pre-determined sections of a media content item, such as chapters in an electronic book or scenes in a movie. In such cases, a single reference by a user to a pre-determined section provides the required information to determine the starting and ending points of the chosen media segment. The starting and ending points provided or inferred based on the user input form metadata that is associated with the media segment in step 115. This metadata is recorded in association with the media segment. In step 120, which is preferably included in the method, the user provides additional metadata relating to the media segment. In step 125, metadata provided by the user or associated with the user is recorded in association with the media segment.
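For purposes of illustration only, the recording steps described above (steps 110 through 125) may be sketched as follows. The record layout, field names, and units are assumptions made for illustration and are not prescribed by the method.

```python
# Illustrative sketch of steps 110-125: record metadata tying a
# user-defined media segment (starting and ending points) to its parent
# media content item, together with metadata relating to the user.

def record_segment_metadata(content_id, start, end, user_metadata):
    if not (0 <= start < end):
        raise ValueError("ending point must follow starting point")
    return {
        "content_id": content_id,  # parent media content item
        "start": start,            # user-defined starting point (seconds)
        "end": end,                # user-defined ending point (seconds)
        "user": user_metadata,     # metadata provided by or relating to the user
    }

record = record_segment_metadata(
    "movie-42", 61.5, 95.0,
    {"user_id": "u-100", "emotive_tags": ["nostalgic"]})
print(record["start"], record["end"])
# → 61.5 95.0
```

A reference to a pre-determined section (e.g. a movie scene) would simply supply the start and end values in a single lookup rather than as two separate user inputs.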
The metadata provided by the user describing the media segment may comprise multiple types of user-specific metadata. In one embodiment, the metadata may comprise a user identifier, such as, but not limited to, a user ID, a
user name, a user icon, a user group, and other forms of user identification known to those skilled in the art. Accordingly, a media segment as so defined provides an indication that the media segment was of relevance to a particular user or user group. Such information may be used to provide search results and/or recommendations to related users or user groups.
For example, if metadata relating to a media segment is generated by a first user, where the metadata includes a metadata element relating to the identity of the first user, a second user may obtain and play the media segment by searching for media segments having such metadata. Unlike prior art methods, this form of media segment generation allows users to search for, or obtain recommendations to, specific and relevant segments within media content items that may serve as launching points for further media content discovery.
Metadata relating to the user may additionally or alternatively include metadata describing the user's interests, interest drivers, personal values, personality attributes, and any combination thereof.
In a preferred embodiment, the user provides metadata relating to a media segment that may comprise metadata describing how or why the media segment is of interest to the user. Preferably, such metadata comprises emotive metadata that is indicative of one or more emotional responses experienced by a user when playing or viewing the media segment, and/or metadata that relates to emotions exhibited by or within the media itself, such as emotions of a character in a movie.
Such emotive metadata preferably relates to a vocabulary enabling
specific emotive terms to be applied as metadata. In a preferred embodiment of the invention, the emotive metadata vocabulary may comprise metadata elements relating to aspirations, interest drivers, personal values, and emotions. A specific emotive metadata vocabulary is provided in Example 1 below. Figure 3 illustrates a specific embodiment of an emotive metadata schema in which emotive metadata may comprise elements relating to self-interest drivers, personality, and emotion.
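One way to represent a three-part schema of this kind (self-interest drivers, personality, emotion) is as a controlled vocabulary against which tags are validated. The category names and sample tags below are illustrative assumptions, not the vocabulary of Example 1:

```python
# Hypothetical emotive metadata vocabulary, grouped into three categories
# along the lines of the schema described for Figure 3.
EMOTIVE_VOCABULARY = {
    "self_interest_driver": {"being in control", "fun, excitement"},
    "personality": {"social", "reserved", "assertive"},
    "emotion": {"happy", "inspired", "sad"},
}

def validate_tag(category, tag):
    """Return True only if the tag belongs to the named vocabulary category."""
    return tag in EMOTIVE_VOCABULARY.get(category, set())
```

Restricting metadata to a fixed vocabulary is what makes later matching of emotive elements across independently tagged segments tractable.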
While the above embodiments have been described in terms of the generation of metadata (relating to a media segment) by an end user or consumer experiencing the media content item, it is to be understood that the scope of the present invention includes other forms of users. For example, a user may be a media owner or other party that generates metadata relating to media segments for commercial purposes, such as to aid in the discovery of media segments within selected media content items that are available for purchase. Alternatively, a user may generate metadata relating to a media segment that is an advertisement or relates to a product or service to aid in the discovery of such a media segment by end users. In yet another non-limiting example, it may be desirable for a media owner to engage, for example through employment or other incentives, users to generate emotive metadata identifying media segments to increase the value and discovery potential of the media content item.
While the aforementioned embodiments have described a method of generation of metadata by a user, the present invention further contemplates a method of generating metadata by providing media content to a user and
receiving input from the user for the purpose of generating user-related metadata defining a media segment.
Figure 4 provides a flow chart illustrating a method of generating and recording user-specific metadata relating to a media segment selected by a user. In step 200, at least a portion of a media content item is provided to a media device for playback. The media content may reside on a local resource such as a local server or database, or may reside on a remote database.
A preferred embodiment is shown in Figure 5, where a media device 150 connects through an internal or external network 300 to communicate with a server 310 running an application program interface (API) 320. The API 320 serves the media device 150 with media content items residing in media content database 340.
Referring again to Figure 4, in step 205, input is received from the user defining the starting and ending points of the media segment. In the preferred embodiment shown in Figure 5, this input is received by API 320. Similarly, in step 210, metadata provided by the user relating to the media segment is received. In step 215, metadata corresponding to the starting and ending positions of the media segment is associated with the media segment and stored either in a metadata database (shown at 330 in Figure 5) or appended to the media content item residing in media content database 340. In step 220, the user-specific metadata relating to the media segment is also stored either in metadata database 330 or appended to the media content item residing in media content database 340. Figure 5 provides a specific and non-limiting embodiment
showing a system for practicing an embodiment of the present invention, and those skilled in the art will readily appreciate that other variants of the system architecture may be possible, and these variants are encompassed within the present invention. While the preceding embodiments describe methods for generating metadata relating to media segments based on user input, metadata relating to user-selected media segments may be additionally or alternatively obtained through automated metadata extraction methods. For example, automated processing using techniques such as text, audio and video analysis may be employed to extract metadata from media segments.
In another preferred embodiment, media segments may be defined by a user without appending user metadata, while still representing a media segment of interest to the user. Such a media segment, initially having no metadata associated therewith, may be appended with metadata, such as emotive metadata, using automated metadata extraction methods as discussed below. Accordingly, the media segment of interest to the user may be selected by the user, and the metadata corresponding to the media segment selected by the user may be automatically determined and associated with the selected media segment. In one embodiment, a media device may be employed by a user to define a media segment based on sampling of a portion of a media content item being played on a separate media presentation device, and cross-referenced with the full media content item based on automated metadata analysis. For example, a media segment may be defined based on a user viewing a video and using a media
device capable of recording an audio portion of the video being viewed. The user may define a media segment within the audio portion recorded by the media device, and subsequently the media segment may be analyzed by performing metadata extraction of the recorded audio portion of the media segment and correlating the metadata with the full media content item for creation of the full media segment with or without associated metadata.
Preferably, metadata associated with a media segment as defined herein involves the incorporation of metadata from multiple sources, such as through an iterative learning process in which correlations are established between the audiovisual properties of film clips, information extracted from collateral texts and user-provided metadata relating to a media segment.
In a preferred embodiment, the media content item is video content relating to films, which enables the extraction and discovery of semantically richer metadata due to conventions followed by film-makers that aid in the construction of emotive metadata vocabularies. For example, conventions are followed in the way that videos tell stories, albeit to varying degrees, including the use of editing techniques and formulaic plot structures. Knowledge of such 'film grammar' can be exploited in the development of multimedia information extraction technologies, particularly with regard to metadata vocabularies, for video content relating to film.
In yet another preferred embodiment, a user may share media segments with other users, and additional metadata may be collected relating to the identity, type, and interests of users with whom the media segments are shared.
For example, metadata describing the type of a user (for example, partners, parents, close friends, and work colleagues) with whom a media segment is shared may be recorded in association with a media segment. Additionally, a user's viewing patterns may be monitored to gather metadata relating to preferred sequences of media segments, and this metadata may be recorded to aid in the suggestion and recommendation of additional media segments. In yet another embodiment, additional metadata relating to a media segment may be obtained by searching online resources, for example, websites such as imdb.com to obtain descriptive metadata such as the Title, Year, Genre, Director, Actors, etc. of a video clip. Alternatively, metadata may be obtained by matching popular results for a video clip from YouTube® against a media segment, and leveraging comments made about a media segment by YouTube® users for metadata annotation. Within the context of film media segments, lists of famous film quotes could be used to obtain additional metadata by matching them against time-coded subtitles.
Additionally, there is a wide range of collateral texts available for film media content from which some aspects of a film's semantics can be extracted to obtain metadata related to a moment. For example, screenplays, subtitles, audio description, plot summaries, and movie reviews may be utilized. Once such texts are temporally aligned with the media content, free-text search becomes straightforward and provides a powerful way to obtain additional metadata related to a media segment, and/or to search over media segments. In one embodiment, natural language processing algorithms may be employed to
identify metadata (both emotive and non-emotive). Such an algorithm may also be utilized to extract metadata from users' comments relating to media segments. Similarly, metadata may be extracted from audio sources by using a speech-to-text converter and subsequently employing an algorithm such as a natural language processing algorithm for metadata extraction. For example, retrieving media segments featuring a car becomes a matter of looking for 'car' in the screenplay or audio clip.
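Once a collateral text such as a subtitle file is temporally aligned with the media, the keyword retrieval just described reduces to scanning time-coded cues. A minimal sketch, assuming a simple `(start, end, text)` cue format that is not specified in the text:

```python
# Each cue: (start_seconds, end_seconds, text), as might be parsed
# from a time-coded subtitle file aligned with the media content item.
cues = [
    (10.0, 14.0, "Get in the car, now!"),
    (30.0, 33.0, "I never liked trains."),
    (55.0, 59.0, "The car won't start."),
]

def find_segments(cues, keyword):
    """Return (start, end) spans whose aligned text mentions the keyword."""
    kw = keyword.lower()
    return [(s, e) for s, e, text in cues if kw in text.lower()]

hits = find_segments(cues, "car")
```

A production system would combine this with natural language processing rather than raw substring matching, but the temporal alignment is what lets a text hit be mapped back to a media segment.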
In another embodiment of the invention, emotive metadata associated with a media segment being played by a user is used to provide to the user at least one additional media segment having related emotive metadata. This embodiment enables a new manner in which a user may experience media, where media discovery is provided by emotive metadata linking separate media segments that need not belong to a common parent media content item.
Prior art media selection and recommendation methods have focused on providing and recommending media content based on generic descriptive or content-related metadata. The present embodiment provides a completely different media playback and browsing experience for a user by preserving the emotional state or mood associated with the viewing of a media segment. Specifically, the present method departs from the "more is better" teachings of the prior art, and instead discloses a "less is more" approach to focused, emotive-metadata-based recommendation and delivery. Such embodiments provide a potentially rich media playback experience in which an emotion, mood, or feeling associated with the playback of a first media
segment may be at least in part preserved during the playback of an additional and automatically selected media segment. All prior art media selection and recommendation methods known to the inventors fail to deliver such an experience. A preferred method is illustrated generally by the flow chart shown in
Figure 6. In step 400, emotive metadata associated with a media segment being played by a user on a media device is used to search a collection of media segments for a list of media segments, where each matching media segment in the list has at least one common emotive metadata element. This step provides a list of emotively relevant media segments that may be subsequently provided to the user for playback. In step 405, a media segment is selected from the list of matching media segments. The selected matching media segment is provided to the user in step 410 for playback. In a preferred embodiment, one or more additional media segments from the list of matching media segments may be provided to the user after providing a first matching media segment.
A matching media segment for recommendation, delivery and/or playback may be selected using a wide range of criteria. In a preferred embodiment, a matching media segment is selected from the list of matching media segments based on a ranking of relevant emotive metadata matches. For example, a media segment having emotive metadata with the largest number of matches with the emotive metadata associated with the media segment being viewed by the user may be selected. It is to be understood that many other selection rules known to those skilled in the art are within the scope of the present invention. In another
non-limiting embodiment, a matching media segment may be selected randomly from the list of media segments. Preferably, a matching media segment recommended, delivered and/or played according to the present embodiment is a media segment that has not already been played during a current user session. This provision avoids repeated playback of media already recommended and viewed.
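The match-and-rank selection of steps 400 through 410 can be sketched as counting shared emotive elements and skipping segments already played in the current session. All names below are illustrative assumptions:

```python
def select_matching_segment(current_tags, collection, played):
    """Pick the unplayed segment sharing the most emotive metadata
    elements with the segment currently being viewed; None if nothing
    in the collection overlaps."""
    best_id, best_overlap = None, 0
    for seg_id, tags in collection.items():
        if seg_id in played:
            continue  # avoid repeating segments from the current session
        overlap = len(set(current_tags) & set(tags))
        if overlap > best_overlap:
            best_id, best_overlap = seg_id, overlap
    return best_id

collection = {
    "a": {"happy", "inspired"},
    "b": {"happy", "inspired", "free"},
    "c": {"sad"},
}
choice = select_matching_segment({"happy", "inspired", "free"}, collection,
                                 played={"b"})
```

With segment "b" already played, the next-best overlap ("a") is selected; a random-selection rule would simply replace the ranking loop with a draw from the candidates sharing at least one element.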
The collection of media segments may be located on a local or remote resource or a combination of the two. In one non-limiting example, media segments may be stored and/or cached in a local memory or database resource such as flash memory or a hard drive. In a preferred embodiment, at least a portion of the collection of media segments is stored on a server. Such an embodiment may be illustrated with reference to Figure 5. The collection of media segments may be stored in media content database 340, either with associated metadata appended to the media segments, or with associated metadata stored in a separate (either physically or logically) database such as metadata database 330.
The searching, selection and delivery of additional media segments to the user may be carried out by an application programming interface 320 operating on a server 310, where the server is connected to the media device through a local or remote network. In a preferred embodiment, server 310 is connected to media device 150 through the internet. Media segments may be provided to media device 150 via one of many methods known to those skilled in the art, including, but not limited to, streaming a media segment and uploading a media
segment to the media device for playback. Those skilled in the art will readily appreciate that additional network and/or system elements may be used to facilitate the method, including the use of additional devices and services such as web servers and firewalls. The system may further interface with proprietary and/or third party streaming media applications. In one non-limiting embodiment, the system may be configured as or interfaced with a social media application, wherein users may access, discover and share media segments.
The emotive metadata associated with the media segment being played and/or the collection of media segments preferably includes emotive metadata elements that pertain to an emotive response of a user when viewing a given media segment. Alternatively or additionally, the emotive metadata may relate to emotions exhibited by or within the media itself, such as emotions of a character in a movie. As discussed above, the emotive metadata elements may belong to an emotive metadata vocabulary. In a preferred embodiment, media segments are played back to the user on a media device that comprises a user interface. The user interface enables the user to view, read or otherwise playback media segments according to the method disclosed above. A non-limiting example of a user interface for displaying video media content according to one embodiment of the invention is shown in Figures 7 and 8. The user interface 500 comprises a display area where video media content is played back to a user. An optional progress bar is included at 510 that may further include controls relating to playback and volume. The user interface preferably includes metadata information 520 relating to the identity of
the media segment being played, and additional controls 530 relating to additional media playback. The user interface preferably includes a control for requesting the delivery and playback of additional media segments having related emotive metadata according to the embodiment of the invention. Additionally, controls 530 may be included for replaying a selected media segment, sharing a preferred media segment with another user, and adding user-specific metadata related to the clip (in accordance with previously recited embodiments of the present invention). As will be apparent to those skilled in the art, the user interface may be provided on a wide range of media presentation devices, for example, a touchscreen device and a display device having an input interface such as a mouse and keyboard.
Preferably, metadata associated with a matching media segment additionally comprises transaction metadata that facilitates a transaction related to a product or service, where the product or service is preferably associated with content within the matching media segment. Those skilled in the art will readily appreciate that transaction-based metadata may be utilized in many different methods to facilitate a transaction.
Non-limiting examples of transaction-based metadata include metadata linking the user to a transaction such as a web page, email address, or IP address. Specific non-limiting transaction-based metadata includes metadata that can be displayed in the user interface as a link (for example, to a web page or telephone number where a user may purchase a product or service and/or initiate a call to inquire for more information regarding a product or service). The
transaction metadata may alternatively be rendered as a button or other selectable item on the user interface. In a preferred embodiment, the matching media segment is a portion of a media content item (such as a movie or song) and the transaction metadata facilitates a transaction by which a user may playback, rent or purchase at least a portion of the media content item (as illustrated in Figure 8). In a preferred embodiment, the matching media segment is a portion of a media content item, and the transaction metadata associated with the media segment allows a user to playback, rent or purchase a media segment that is sequentially related to the media segment currently being viewed, thereby facilitating continued viewing of a media content item.
In another embodiment of the invention, a method is provided for inserting an additional media segment within a media content item, where the media segment has associated emotive metadata that is related to that of a media segment identified within the media content item. This embodiment enables the insertion of emotively relevant content into a media content item.
A preferred method is illustrated generally by the flow chart shown in Figure 9. In step 600, an emotive media segment having emotive metadata is identified within a media content item. The emotive media segment may be identified based on emotive metadata created using, for example, previously disclosed embodiments of the present invention. In one embodiment, the emotive media segment may be obtained by searching within metadata associated with the media content item for a specific emotive metadata element, thereby obtaining an emotive media segment having associated emotive
metadata including the specific emotive metadata element.
In step 605, a collection of media segments is searched to obtain a list of matching media segments having metadata in which at least one emotive metadata element is common with the metadata relating to the emotive media segment identified in the previous step. In step 610, a matching media segment is selected from the list for insertion. As noted above with reference to a preceding embodiment of the invention, the matching media segment may be selected based on a wide range of criteria. In several non-limiting examples, the matching media segment may be selected based on a ranking of the matches, random selection, or, in the case of the collection of media segments being advertisements, contractual agreements pertaining to the insertion frequency and/or priority. Finally, in step 615, the selected matching media segment is inserted into the media content item after the identified emotive media segment. In an alternative embodiment, an additional media segment having associated emotive metadata may be pre-selected for insertion within a media content item. Accordingly, an emotive media segment after which the additional media segment is to be inserted may be identified by searching for a pre-defined media segment within the media content item that has associated metadata including at least one emotive metadata element common with the emotive metadata associated with the additional media segment.
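Steps 600 through 615 can be sketched as locating the emotive segment in the item's timeline and splicing an emotively matching segment in directly after it. The names and the list-of-segments representation of the media content item are hypothetical:

```python
def insert_after_emotive(timeline, target_tag, candidates):
    """Find the first segment carrying target_tag (step 600) and insert,
    directly after it, the first candidate sharing at least one emotive
    tag with it (steps 605-615)."""
    for i, seg in enumerate(timeline):
        if target_tag in seg["tags"]:
            for cand in candidates:
                if seg["tags"] & cand["tags"]:
                    return timeline[: i + 1] + [cand] + timeline[i + 1 :]
    return timeline  # no emotively relevant match; item is left unchanged

timeline = [
    {"id": "scene1", "tags": {"calm"}},
    {"id": "scene2", "tags": {"happy", "inspired"}},
]
ads = [{"id": "ad1", "tags": {"inspired"}}]
result = insert_after_emotive(timeline, "happy", ads)
```

Here the advertisement sharing the "inspired" element is placed immediately after the identified emotive scene, which is the placement the embodiment describes; ranking or contractual-priority selection rules would replace the inner candidate loop.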
The insertion of the additional media segment may be made prior to playback of the media content item by a user, or dynamically during playback of the media content item by the user. In a preferred embodiment, the emotive
metadata associated with the emotive media segment is user-specific emotive metadata, enabling the insertion of additional media segments targeting the emotive aspects of the user directly.
As discussed with reference to previously disclosed embodiments of the invention, the media content item and/or additional media segments may be stored within a local or remote resource, or a combination of the two. In one non-limiting example, media segments may be stored and/or cached in a local memory or database resource such as flash memory or a hard drive. In a preferred embodiment, either or both of the media content item and media segments are stored on a server. Such an embodiment may be illustrated again with reference to Figure 5. The media content item, and its associated media segments, may be stored in media content database 340, either with associated metadata appended to the media segments, or with associated metadata stored in a separate (either physically or logically) database such as metadata database 330. Additional media segments for insertion may be stored in a common or separate resource.
The identification of an emotive media segment and/or searching, selection and delivery of additional media segments for insertion may be carried out by an application programming interface 320 operating on a server 310, where the server is connected to the media device through a local or remote network. In a preferred embodiment, server 310 is connected to media device 150 through the internet. Media content may be provided to media device 150 via one of many methods known to those skilled in the art, including, but not limited
to, streaming a media segment and uploading a media segment to the media device for playback. Those skilled in the art will readily appreciate that additional network and/or system elements may be used to facilitate the method, including the use of additional devices and services such as web servers and firewalls. The system may further interface with proprietary and/or third party streaming media applications.
As noted above, in a preferred embodiment, the media segment inserted within the content item is an advertisement. The insertion of an emotively relevant advertising media segment after a related media segment in a media content item assists in preserving the emotion, or the moment, experienced by a viewer, and thereby delivers a far less disruptive user viewing experience. As a result, the insertion of emotively relevant media segments, particularly those with additional transaction metadata, facilitates improved advertising conversion rates. Preferably, the emotive metadata either in the media segment identified within the media content item, or in the additional media segment to be inserted, is indicative of an anticipated heightened emotional state of a user.
Accordingly, in a preferred embodiment, the metadata associated with the additional media segment to be inserted preferably additionally comprises transaction metadata that facilitates a transaction related to a product or service, where the product or service is preferably associated with content within the inserted media segment. As noted above, those skilled in the art will readily appreciate that transaction-based metadata may be utilized in many different methods to facilitate a transaction. Again, non-limiting examples of transaction-based metadata include metadata linking the user to a transaction with a link such as a web page, email address, telephone number or IP address. Specific non-limiting transaction-based metadata includes metadata that can be displayed in a user interface as a link to a web page where a user may purchase a product or service and/or inquire for more information regarding a product or service. The transaction metadata may alternatively be rendered as a button or other selectable item on the user interface.
The following example is presented to enable those skilled in the art to understand and to practice the present invention. It should not be considered as a limitation on the scope of the invention, but merely as being illustrative and representative thereof.
EXAMPLE 1
Examples of Emotive Metadata Vocabularies

In one embodiment, relevancy tags are grouped into three categories: value profiles, interest drivers, and emotions. Examples of value profile tags include:
• Social - outgoing, extroverted
• Emotional - touching, sensitive, feeling
• Reserved - quiet, introverted
• Spontaneous - creative, impulsive
• Involving - consensus-oriented, harmonious
• Assertive - in control, decisive
• Independent - individualistic
• Selfless - giving
• Rational - practical, organized
• Conservative - traditional
• Progressive - Innovative
Examples of interest driver tags include:
• prestigious for people
• self-sufficiency, independence, autonomy
• being in control
• appreciation or protection of others or nature
• experience of personal success or achievement
• respect and acceptance of our culture, community, traditions
• pleasurable sensuous feelings
• affiliate with others, to fit in, consensus
• greater efficiency, easier life, practicality
• increase safety, security, reduce risk, avoid problems, conflict
• fun, excitement, something different
Examples of positive emotion tags include:
• Happy
• Warm Fuzzy
• Curious/Interested
• Harmony/Connection
• Proud/Self-Respect
• Gratitude/Relieved
• Jealous/Wishful
• At Peace/Normal
• Surprised/Amazed
• Entertained/Pleased
• Turned-On
• Trust
• Inspired/Encouraged
• Cool/Calm
• Free/Unrestricted
• Confidence
• Attraction/Charmed
• Appreciated/Special
• Love
• Eager/Enthusiastic
Examples of negative emotion tags include:
• Outraged
• Hate/Repulsed
• Irritated/Sceptical
• Intimidated
• Hurt/Upset
• Ashamed/Guilt
• Embarrassed
• Aloof/Feel Superior
• Confused
• Worried/Concerned
• Bored
• Shy
• Sad/Depressed
• Lonely/Ignored
• Apathetic/Unmoved
• Tired/Worn-out
• Disappointed
• Shocked
• Dislike
• Exploited/Ripped-off
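As a rough illustration of how the three categories of Example 1 could be applied in software, the tags can be modeled as sets so that any relevancy tag can be classified. Only a handful of the tags listed above are included here, and the category keys are illustrative assumptions:

```python
# Abbreviated subset of the Example 1 relevancy tags, grouped by category.
RELEVANCY_TAGS = {
    "value_profile": {"Social", "Reserved", "Assertive", "Rational"},
    "interest_driver": {"being in control",
                        "fun, excitement, something different"},
    "emotion": {"Happy", "Love", "Bored", "Sad/Depressed"},
}

def categorize(tag):
    """Return the categories (possibly several) in which the tag appears."""
    return [cat for cat, tags in RELEVANCY_TAGS.items() if tag in tags]
```

Classifying tags this way lets the matching and insertion embodiments above weigh, for example, emotion matches differently from value-profile matches.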
The above embodiment describes only a few examples of the various categories and specific elements or relevancy tags that may be used with embodiments of the present invention. As such, the listing of specific examples is not intended to limit the scope of the present invention.
The foregoing description of the preferred embodiments of the invention has been presented to illustrate the principles of the invention and not to limit the invention to the particular embodiment illustrated. It is intended that the scope of the invention be defined by all of the embodiments encompassed within the following claims and their equivalents.
Claims
1. A computer-implemented method of generating metadata in association with a media segment of a media content item, said method comprising the steps of: providing at least a portion of said media content item to a user for playback on a user interface of a media device; receiving input from said user defining a starting point of said media segment and an ending point of said media segment; recording, in association with said media content item, metadata corresponding to said starting and ending points within said media content item and metadata relating to said user.
2. The method according to claim 1 further comprising receiving input from said user providing metadata relating to a content of said media segment, and recording, in association with said media content item, said metadata relating to a content of said media segment.
3. The method according to claim 2 wherein said metadata relating to a content of said media segment comprises at least one emotive metadata element.
4. The method according to any one of claims 1 to 3 wherein said steps are carried out by an application programming interface running on a server.
5. The method according to claim 4 wherein said server is a remote server.
6. The method according to claim 4 wherein said media content item is stored in an external media content database.
7. The method according to any one of claims 5 and 6 wherein said metadata is recorded in a metadata database.
8. The method according to any one of claims 1 to 7 wherein said metadata relating to said user comprises one of user identification information, user group identification information, and a combination thereof.
9. The method according to any one of claims 1 to 8 wherein said metadata relating to said user comprises one or more metadata elements corresponding to one of user interests, user interest drivers, user personal values, user personality attributes, and a combination thereof.
10. The method according to any one of claims 1 to 9 wherein said step of receiving an input from said user comprises the steps of: receiving a first input from said user defining a first location within said media content item, said first location corresponding to a starting point of said media segment; and receiving a second input from said user defining a second location within said media content item, said second location corresponding to an ending point of said media segment.
11. The method according to any one of claims 1 to 10 wherein one or more of said starting point and said ending point comprises a time stamp, said time stamp measured relative to a reference time point within said media content item.
12. The method according to any one of claims 1 to 10 wherein said media content item is segmented into a plurality of media content elements, and wherein said starting point comprises a first media content element and said ending point comprises a second media content element.
13. The method according to claim 12 wherein said media content item is a video and wherein said media content elements are video frames.
14. A computer-readable medium having instructions stored thereon that, when executed by a processor, perform a method of generating metadata in association with a media segment of a media content item, said method comprising the steps of: providing at least a portion of said media content item to a user for playback on a user interface of a media device; receiving input from said user defining a starting point of said media segment and an ending point of said media segment; recording, in association with said media content item, metadata corresponding to said starting and ending points within said media content item and metadata relating to said user.
15. A system for generating metadata in association with a media segment of a media content item, the system comprising: an application programming interface running on a server for providing at least a portion of said media content item to a media device comprising a user interface and for receiving input from a user defining a starting point of said media segment and an ending point of said media segment; and a database for recording, in association with said media segment selected by said user, metadata corresponding to said starting and ending points within said media content item and metadata relating to said user.
16. A computer-implemented method of providing additional media content to a user based on emotive metadata associated with a media segment being played on a user interface of a media device, said method comprising the steps of: a) searching metadata associated with a collection of media segments to obtain a list of matching media segments, wherein each media segment in said list of matching media segments has associated therewith metadata comprising at least one emotive metadata element common with said emotive metadata of said media segment being played; b) selecting a matching media segment from said list of matching media segments, provided that said selected matching media segment has not been previously selected during a current user session; and c) providing said selected media segment for playback upon completion of said media segment being played.
17. The method according to claim 16 wherein step a) further comprises: searching a metadata database to obtain a list of matching metadata records, wherein each matching metadata record in said list of matching metadata records comprises at least one emotive metadata element common with said emotive metadata of said media segment being played, and wherein each metadata record in said list of matching metadata records relates to a matching media segment available for playback on said user interface of said media device; and obtaining a list of said matching media segments.
18. The method according to any one of claims 16 and 17 wherein said emotive metadata of said media segment being played and emotive metadata relating to said matching media segments comprise metadata elements belonging to a common emotive metadata vocabulary.
19. The method according to any one of claims 16 to 18 further comprising the step of playing at least one additional media segment from said list of matching media segments upon the completion of the playback of said selected matching media segment.
20. The method according to any one of claims 16 to 19 wherein one or more of said matching media segments are advertisements having emotive metadata associated therewith.
21. The method according to claim 18 wherein said common emotive vocabulary comprises one or more metadata elements identifying an emotion that may be expressed by a character portrayed in a media segment.
22. The method according to claim 18 wherein said common emotive vocabulary comprises one or more metadata elements identifying an emotional response that may be experienced by a user during a playback of a media segment.
23. The method according to claim 18 wherein said common emotive metadata vocabulary comprises one or more metadata elements relating to one or more of interest drivers of a user to which the content of a media segment may relate, personality values of a user to which the content of a media segment may relate, and a combination thereof.
24. The method according to any one of claims 16 to 23 wherein said selected media segment has transaction metadata associated therewith, wherein said transaction metadata facilitates a transaction related to a product or service associated with a content of a given media segment, said method further comprising the step of offering said user an opportunity to select said transaction via said user interface during or after playback of said selected matching media segment.
25. The method according to claim 24 wherein said selected matching media segment comprises a segment of a media content item, wherein said transaction comprises one of purchasing at least a portion of said media content item and purchasing playback rights to at least a portion of said media content item.
26. The method according to any one of claims 16 to 25 wherein said playback of said selected matching media segment is commenced after receiving input from said user providing a request for continued media presentation.
27. The method according to claim 19 wherein playback of each of said at least one additional media segments is commenced after receiving input from said user providing a request for continued media presentation.
28. The method according to any one of claims 16 to 25 wherein said playback of said selected matching media segment is initiated immediately following the completion of said media segment being played.
29. The method according to any one of claims 16 to 28 wherein said matching media segments are ranked based on the number of matching emotive metadata elements, and wherein a matching media segment having a highest ranking is selected for playback.
30. The method according to any one of claims 16 to 28 wherein a matching media segment is randomly selected for playback from said list of matching media segments.
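Claims 16, 29 and 30 together describe a selection loop: find segments sharing at least one emotive metadata element with the segment being played, skip any segment already selected in the current session, then pick either the highest-ranked match (most shared elements, claim 29) or a random match (claim 30). A minimal sketch of that logic, assuming segments are keyed by id and their emotive metadata is modelled as a set of vocabulary elements (all identifiers here are hypothetical):

```python
import random

def next_segment(playing_tags, library, session_played, rank=True):
    """Pick the next segment based on shared emotive metadata.

    library: dict mapping segment_id -> set of emotive metadata elements.
    A match must share at least one element with the playing segment
    (claim 16 step a) and must not already have been played this
    session (step b). With rank=True the segment sharing the most
    elements wins (claim 29); otherwise a random match is chosen
    (claim 30). Returns None when no match remains.
    """
    matches = {
        seg_id: tags & playing_tags
        for seg_id, tags in library.items()
        if (tags & playing_tags) and seg_id not in session_played
    }
    if not matches:
        return None
    if rank:
        return max(matches, key=lambda s: len(matches[s]))
    return random.choice(sorted(matches))

library = {
    "ad-1": {"joy", "excitement"},
    "clip-7": {"joy", "nostalgia", "warmth"},
    "clip-9": {"fear"},
}
pick = next_segment({"joy", "nostalgia"}, library, session_played={"ad-1"})
# "ad-1" is excluded as already played; "clip-7" shares two elements
```

Repeating the call with the chosen segment added to `session_played` yields the continued-playback behaviour of claims 19, 26 and 27.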
31. The method according to any one of claims 16 to 30 wherein said steps are carried out by an application programming interface running on a server.
32. The method according to claim 31 wherein said server is a remote server.
33. The method according to claim 31 wherein said collection of media segments is stored in an external media content database.
34. The method according to claim 31 wherein said metadata associated with a collection of media segments is recorded in a metadata database.
35. A computer-readable medium having instructions stored thereon that, when executed by a processor, perform a method of providing additional media content to a user based on emotive metadata associated with a media segment being played on a user interface of a media device, said method comprising the steps of: a) searching metadata associated with a collection of media segments to obtain a list of matching media segments, wherein each media segment in said list of matching media segments has associated therewith metadata comprising at least one emotive metadata element common with said emotive metadata of said media segment being played; b) selecting a matching media segment from said list of matching media segments, provided that said selected matching media segment has not been previously selected during a current user session; and c) providing said selected media segment for playback upon completion of said media segment being played.
36. A system for providing additional media content to a user based on emotive metadata associated with a media segment being played on a user interface of a media device, said system comprising: an application programming interface running on a server for searching metadata associated with a collection of media segments to obtain a list of matching media segments, wherein each media segment in said list of matching media segments has associated therewith metadata comprising at least one emotive metadata element common with said emotive metadata of said media segment being played; said application programming interface further programmed to select a matching media segment from said list of matching media segments, provided that said selected matching media segment has not been previously selected during a current user session and to provide said selected media segment for playback on said user interface of said media device; a media content database for storing said collection of media segments; and a metadata database for storing metadata associated with said collection of media segments.
37. A computer-implemented method of inserting an additional media segment within a media content item based on a relationship between emotive metadata associated with said media content item and emotive metadata associated with said additional media segment, said method comprising the steps of: a) identifying an emotive media segment within said media content item, said emotive media segment comprising a portion of said media content item, wherein said emotive media segment has associated therewith emotive metadata; b) searching metadata associated with a collection of media segments for a list of matching media segments, wherein each media segment in said list of matching media segments has associated therewith metadata comprising at least one emotive metadata element common with said emotive metadata of said emotive media segment; c) selecting a matching media segment from said list of matching media segments; and d) inserting said selected matching media segment into said media content item after said emotive media segment.
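Claim 37 describes insertion rather than sequential recommendation: locate an emotive segment inside the item, find an additional segment (for example an advertisement, per claim 43) sharing at least one emotive metadata element with it, and splice that segment into the item immediately after the emotive segment. A minimal sketch under the assumption that the item's playback order is an ordered list of segment ids (all names hypothetical, not from the patent):

```python
def insert_after_emotive(timeline, emotive_id, candidates, emotive_tags):
    """Splice the best-matching additional segment into the timeline
    immediately after the identified emotive segment (claim 37 sketch).

    timeline: ordered list of segment ids making up the media content item.
    candidates: dict mapping segment_id -> set of emotive metadata elements.
    Returns the timeline unchanged when no candidate shares an element.
    """
    matching = [
        (len(tags & emotive_tags), seg_id)
        for seg_id, tags in candidates.items()
        if tags & emotive_tags
    ]
    if not matching:
        return timeline
    _, chosen = max(matching)  # most shared emotive elements wins
    i = timeline.index(emotive_id)
    return timeline[:i + 1] + [chosen] + timeline[i + 1:]

timeline = ["scene-1", "scene-2", "scene-3"]
ads = {"ad-a": {"triumph"}, "ad-b": {"sadness"}}
new_timeline = insert_after_emotive(timeline, "scene-2", ads,
                                    emotive_tags={"triumph", "joy"})
```

Here `ad-a` shares the element `triumph` with the emotive segment, so it is inserted between `scene-2` and `scene-3`; claim 49's variant would run this splice during playback rather than ahead of time.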
38. The method according to claim 37 wherein said emotive metadata associated with said emotive media segment is indicative of a heightened state of emotional interest experienced by a user when playing said emotive media segment on a media device.
39. The method according to any one of claims 37 and 38 wherein said emotive media segment is identified within said media content item by searching metadata associated with said segment for emotive metadata elements correlated with a segment within said media content item.
40. The method according to any one of claims 37 to 39, wherein said emotive metadata associated with said emotive media segment and said emotive metadata associated with said list of matching media segments comprises metadata elements belonging to a common emotive metadata vocabulary.
41. The method according to claim 40 wherein said common emotive vocabulary comprises one or more metadata elements identifying an emotional response that may be experienced by a user during a playback of a media segment.
42. The method according to claim 40 wherein said common emotive metadata vocabulary comprises one or more metadata elements relating to one or more of interest drivers of a user to which the content of a media segment may relate, personality values of a user to which the content of a media segment may relate, and a combination thereof.
43. The method according to any one of claims 37 to 42 wherein one or more of said matching media segments are advertisements having emotive metadata associated therewith.
44. The method according to any one of claims 37 to 43 wherein said selected media segment has transaction metadata associated therewith, wherein said transaction metadata facilitates a transaction related to a product or service associated with a content of a given media segment by a user.
45. The method according to any one of claims 37 to 44 wherein said steps are carried out by an application programming interface running on a server.
46. The method according to claim 45 wherein said server is a remote server.
47. The method according to claim 45 wherein said collection of media segments is stored in an external media content database.
48. The method according to claim 45 wherein said metadata associated with a collection of media segments is recorded in a metadata database.
49. The method according to any one of claims 37 to 48 wherein said steps are executed during playback of said media content item on a media device.
50. The method according to any one of claims 37 to 49 wherein said selected media segment has transaction metadata associated therewith, wherein said transaction metadata facilitates a transaction related to a product or service associated with a content of a given media segment, said method further comprising the step of offering a user an opportunity to select said transaction via a user interface on said media device during or after playback of said selected matching media segment.
51. A computer-readable medium having instructions stored thereon that, when executed by a processor, perform a method of inserting an additional media segment within a media content item based on a relationship between emotive metadata associated with said media content item and emotive metadata associated with said additional media segment, said method comprising the steps of: a) identifying an emotive media segment within said media content item, said emotive media segment comprising a portion of said media content item, wherein said emotive media segment has associated therewith emotive metadata; b) searching metadata associated with a collection of media segments for a list of matching media segments, wherein each media segment in said list of matching media segments has associated therewith metadata comprising at least one emotive metadata element common with said emotive metadata of said emotive media segment; c) selecting a matching media segment from said list of matching media segments; and d) inserting said selected matching media segment into said media content item after said emotive media segment.
52. A system for generating metadata in association with a media segment of a media content item, the system comprising: an application programming interface running on a server for providing at least a portion of said media content item to a media device comprising a user interface and for receiving input from a user defining a starting point of said media segment and an ending point of said media segment; and a database for recording, in association with said media segment selected by said user, metadata corresponding to said starting and ending points within said media content item and metadata relating to said user.
53. A system for inserting an additional media segment within a media content item based on a relationship between emotive metadata associated with said media content item and emotive metadata associated with said additional media segment, said system comprising: an application programming interface running on a server for identifying an emotive media segment within said media content item, said emotive media segment comprising a portion of said media content item, wherein said emotive media segment has associated therewith emotive metadata; said application programming interface further programmed to search metadata associated with a collection of media segments for a list of matching media segments, wherein each media segment in said list of matching media segments has associated therewith metadata comprising at least one emotive metadata element common with said emotive metadata of said emotive media segment; a media content database for storing said collection of media segments; and a metadata database for storing metadata associated with said collection of media segments.
54. A computer-implemented method of inserting an additional media segment within a media content item based on a relationship between emotive metadata associated with said media content item and emotive metadata associated with said additional media segment, said method comprising the steps of: a) identifying an emotive media segment within said media content item, said emotive media segment comprising a portion of said media content item, wherein said emotive media segment has associated therewith emotive metadata having at least one emotive metadata element common with said emotive metadata of said additional media segment, and b) inserting said additional media segment into said media content item after said emotive media segment.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US20442609P | 2009-01-07 | 2009-01-07 | |
| US61/204,426 | 2009-01-07 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2010078650A1 (en) | 2010-07-15 |
Family
ID=42316161
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CA2010/000010 Ceased WO2010078650A1 (en) | 2009-01-07 | 2010-01-07 | Identification, recommendation and delivery of relevant media content |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2010078650A1 (en) |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2013138038A1 (en) * | 2012-03-14 | 2013-09-19 | General Instrument Corporation | Sentiment mapping in a media content item |
| WO2014159783A3 (en) * | 2013-03-14 | 2015-01-29 | General Instrument Corporation | Advertisement insertion |
| US8965915B2 (en) | 2013-03-17 | 2015-02-24 | Alation, Inc. | Assisted query formation, validation, and result previewing in a database having a complex schema |
| US8995822B2 (en) | 2012-03-14 | 2015-03-31 | General Instrument Corporation | Sentiment mapping in a media content item |
| EP2904561A4 (en) * | 2012-10-01 | 2016-05-25 | Google Inc | SYSTEM AND METHOD FOR OPTIMIZING VIDEOS |
| US10812853B2 (en) | 2018-10-23 | 2020-10-20 | At&T Intellecutal Property I, L.P. | User classification using a remote control detail record |
| CN115499704A (en) * | 2022-08-22 | 2022-12-20 | 北京奇艺世纪科技有限公司 | Video recommendation method and device, readable storage medium and electronic equipment |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2002013065A1 (en) * | 2000-08-03 | 2002-02-14 | Epstein Bruce A | Information collaboration and reliability assessment |
| US20080059989A1 (en) * | 2001-01-29 | 2008-03-06 | O'connor Dan | Methods and systems for providing media assets over a network |
Cited By (17)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8995822B2 (en) | 2012-03-14 | 2015-03-31 | General Instrument Corporation | Sentiment mapping in a media content item |
| EP2826254A1 (en) * | 2012-03-14 | 2015-01-21 | General Instrument Corporation | Sentiment mapping in a media content item |
| WO2013138038A1 (en) * | 2012-03-14 | 2013-09-19 | General Instrument Corporation | Sentiment mapping in a media content item |
| US9106979B2 (en) | 2012-03-14 | 2015-08-11 | Arris Technology, Inc. | Sentiment mapping in a media content item |
| EP2904561A4 (en) * | 2012-10-01 | 2016-05-25 | Google Inc | SYSTEM AND METHOD FOR OPTIMIZING VIDEOS |
| US10194096B2 (en) | 2012-10-01 | 2019-01-29 | Google Llc | System and method for optimizing videos using optimization rules |
| US11930241B2 (en) | 2012-10-01 | 2024-03-12 | Google Llc | System and method for optimizing videos |
| WO2014159783A3 (en) * | 2013-03-14 | 2015-01-29 | General Instrument Corporation | Advertisement insertion |
| EP2954672A4 (en) * | 2013-03-14 | 2016-10-19 | Arris Entpr Inc | ADVERTISING INSERTION |
| US9497507B2 (en) | 2013-03-14 | 2016-11-15 | Arris Enterprises, Inc. | Advertisement insertion |
| KR101800098B1 (en) * | 2013-03-14 | 2017-11-21 | 제너럴 인스트루먼트 코포레이션 | Advertisement insertion |
| US8996559B2 (en) | 2013-03-17 | 2015-03-31 | Alation, Inc. | Assisted query formation, validation, and result previewing in a database having a complex schema |
| US8965915B2 (en) | 2013-03-17 | 2015-02-24 | Alation, Inc. | Assisted query formation, validation, and result previewing in a database having a complex schema |
| US9244952B2 (en) | 2013-03-17 | 2016-01-26 | Alation, Inc. | Editable and searchable markup pages automatically populated through user query monitoring |
| US10812853B2 (en) | 2018-10-23 | 2020-10-20 | At&T Intellecutal Property I, L.P. | User classification using a remote control detail record |
| CN115499704A (en) * | 2022-08-22 | 2022-12-20 | 北京奇艺世纪科技有限公司 | Video recommendation method and device, readable storage medium and electronic equipment |
| CN115499704B (en) * | 2022-08-22 | 2023-12-29 | 北京奇艺世纪科技有限公司 | Video recommendation method and device, readable storage medium and electronic equipment |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20230325437A1 (en) | User interface for viewing targeted segments of multimedia content based on time-based metadata search criteria | |
| JP6342951B2 (en) | Annotate video interval | |
| Gao et al. | Vlogging: A survey of videoblogging technology on the web | |
| US7533091B2 (en) | Methods, systems, and computer-readable media for generating a suggested list of media items based upon a seed | |
| CN101981563B (en) | Method and apparatus for selecting relevant content for display in conjunction with media | |
| TWI514171B (en) | System and methods for dynamic page creation | |
| US20130218942A1 (en) | Systems and methods for providing synchronized playback of media | |
| CN107087225B (en) | Using closed captioning streams for device metadata | |
| US20100088327A1 (en) | Method, Apparatus, and Computer Program Product for Identifying Media Item Similarities | |
| US10013704B2 (en) | Integrating sponsored media with user-generated content | |
| CN103097987A (en) | System and method for providing video clips, and the creation thereof | |
| JPWO2006019101A1 (en) | Content-related information acquisition device, content-related information acquisition method, and content-related information acquisition program | |
| US9418141B2 (en) | Systems and methods for providing a multi-function search box for creating word pages | |
| WO2010078650A1 (en) | Identification, recommendation and delivery of relevant media content | |
| US20140317099A1 (en) | Personalized digital content search | |
| Pinto et al. | YouTube timed metadata enrichment using a collaborative approach | |
| Riley | The revolution will be televised: Identifying, organizing, and presenting correlations between social media and broadcast television | |
| Chorianopoulos et al. | Social video retrieval: research methods in controlling, sharing, and editing of web video | |
| Kaiser et al. | Metadata-based Adaptive Assembling of Video Clips on the Web | |
| Galuščáková | Information retrieval and navigation in audio-visual archives | |
| Kim et al. | iFlix | |
| Profumo | Can Music Be Personalized? |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 10729071; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 10729071; Country of ref document: EP; Kind code of ref document: A1 |