HK1189079A - Method for presenting a media program - Google Patents
- Publication number
- HK1189079A (application HK14102104.3A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- media
- reaction
- user
- program
- module
- Prior art date
Description
Technical Field
The invention relates to determining a future portion of a currently presented media program.
Background
Currently, advertising and media providers often test advertisements and other media programs before the programs are widely distributed. For example, a media provider may show a situation comedy to a small number of viewers, who then provide feedback through surveys or manually tracked information logs. However, these surveys and logs are often inaccurate. For example, a viewer may not remember an interesting joke at the 3rd minute of a 24-minute program. Moreover, even when the results are reasonably accurate, the number of viewers is often so small that they may not reliably indicate how well the program will be received when it is widely distributed.
The media provider may also test the media program through intrusive biometric testing of viewers during presentation of the media program in a controlled environment. Such testing may be more accurate, but the audience population is often even smaller than in survey and log tests. Moreover, even this testing can be highly inaccurate, due in part to the controlled environment in which it is conducted: a person is less likely to laugh when strapped to electrical testing equipment in a sound booth than when he or she is at home.
Also, in either of these cases, the delay in changing a program can be significant. Recording a new program or changing the current program may take days or weeks, and even when that is done, the changed program may need to be tested again, further delaying distribution of the program.
Disclosure of Invention
Techniques and apparatuses for determining a future portion of a currently presented media program are described herein. The techniques and apparatus may receive a current media reaction of one or more people to a currently presented media program and determine a later portion of the media program to present based on the media reaction. For example, in some embodiments, a program may be presented live, reactions may be received during the live presentation, and the program may be altered in progress (on-the-fly) and in real-time based on those reactions. Moreover, the changes may be general or customized to a group of people or a specific person.
This summary is provided to introduce simplified concepts for determining future portions of a currently presented media program that are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Drawings
Embodiments of techniques and apparatuses for determining a future portion of a currently presented media program are described with reference to the following figures. In the drawings, like reference numerals are used to refer to like features and components:
FIG. 1 illustrates an example environment in which techniques for determining future portions of a currently presented media program, as well as other techniques, may be implemented.
FIG. 2 is an illustration of an example computing device local to the viewer of FIG. 1.
FIG. 3 is an illustration of an example remote computing device that is remote to the viewer of FIG. 1.
FIG. 4 illustrates an example method for determining media reaction based on passive sensor data.
FIG. 5 illustrates a time-based graph of media reactions, these being interest levels for one user and for forty time periods during presentation of a media program.
FIG. 6 illustrates an example method for constructing a reaction history.
FIG. 7 illustrates an example method for presenting an advertisement based on a current media reaction, including by determining which of a plurality of potential advertisements to present.
FIG. 8 illustrates a current media reaction to a media program over a portion of the program as the program is being presented.
FIG. 9 illustrates an example method for presenting an advertisement based on a current media reaction, including based on bids from advertisers.
FIG. 10 illustrates the advertising module of FIGS. 2 and 3 communicating information to a plurality of advertisers over the communication network of FIG. 3.
FIG. 11 illustrates a method for presenting an advertisement based on a current media reaction, including a scene immediately following the current media reaction.
FIG. 12 illustrates a method for determining a future portion of a currently presented media program, including based on a current media reaction of a user, the current media reaction determined based on sensor data passively sensed during presentation of the media program to the user.
FIG. 13 illustrates the remote device of FIG. 3, in which demographics, a portion of a reaction history, a current media reaction, and information about the media program are received from the computing device of FIG. 2 and/or FIG. 3.
FIG. 14 illustrates a method for determining a future portion of a currently presented media program, including where the future portion responds to an explicitly requested media reaction.
FIG. 15 illustrates a method for determining a future portion of a currently presented media program, including based on media reactions of multiple users.
FIG. 16 illustrates an example device in which techniques for determining a future portion of a currently presented media program, as well as other techniques, may be implemented.
Detailed Description
Overview
Techniques and apparatuses for determining a future portion of a currently presented media program are described herein. The techniques and apparatus allow for alteration or determination of portions of a media program during presentation of the media program.
For example, consider a situation comedy being presented to thousands of viewers. Assume that the media provider of this situation comedy prepares, in advance, multiple portions to be presented at certain points in the situation comedy: 3 different scenes for the 19th minute and 4 different ending scenes for the end of the program. The techniques may determine which of the 3 scenes to present at the 19th minute, and which of the 4 different endings to present at the end, based on media reactions during the presentation. Which scenes are presented may be based on numerous media reactions to prior scenes, such as media reactions from the thousands of viewers, or on the reactions of one person or of a demographic group. By so doing, the techniques may alter the program for everyone or customize the program for a group or a particular person. Thus, the techniques may present, at the 19th minute, a scene containing physical comedy to a population of men aged 18-34 based on their reactions to prior scenes containing physical comedy, present a scene showing a character's development to a population of women aged 35-44 based on their reactions to prior scenes containing that character, and display one of the four possible endings to all viewers based on various reactions from all of the populations currently watching the program.
This is merely one example of how techniques and/or apparatuses for determining a future portion of a currently presented media program may be performed. Where the context permits, the techniques and/or apparatuses are referred to herein separately or in conjunction as "techniques". This document now turns to an example environment in which the techniques may be embodied, and then to example methods that can, but are not required to, work with the techniques. Some of these various methods include methods for sensing reactions to media, constructing a reaction history for a user, and presenting an advertisement based on a current reaction. Following these various methods, this document turns to example methods for determining a future portion of a currently presented media program.
Example Environment
FIG. 1 is an illustration of an example environment 100 for receiving sensor data and determining media reactions based on the sensor data. These media responses may be used to determine future portions of the currently presented media program, among other uses. The techniques may use these media responses, alone or in combination with other information, such as demographics, response history, and information about the media program or a portion thereof.
The environment 100 includes a media presentation device 102, an audience sensing device 104, a status module 106, an interest module 108, an interface module 110, and a user interface 112.
The media presentation device 102 presents a media program to an audience 114 having one or more users 116. A media program may include, alone or in combination, a television show, a movie, a music video, a video clip, an advertisement, a blog, a photograph, a web page, an electronic book, an electronic magazine, a computer game, a song, a tweet, or other audio and/or video media. The audience 114 may include one or more users 116 located at a position permitting consumption of the media program presented by the media presentation device 102 and measurement by the audience sensing device 104, whether separately or within a group of users. Three users are shown in the audience 114: user 116-1, user 116-2, and user 116-3.
The audience sensing devices 104 are capable of sensing the audience 114 and providing sensor data of the audience 114 to the status module 106 and/or the interest module 108 (the sensor data 118 is shown as being provided via an arrow). The sensed data may be sensed passively, actively, and/or in response to an explicit request.
Passively sensed sensor data is passive in that it does not require active participation of the users as they are measured. Actively sensed sensor data includes data recorded by users in an audience, such as handwritten logs, and data sensed from users through biometric sensors worn by users in the audience. Sensor data sensed in response to an explicit request may be sensed actively or passively. One example is an advertisement that requests that a user raise his or her hand during the advertisement if he or she would like a coupon for a free sample of a product to be mailed to the user. In this case, the user is expressing the reaction of raising a hand, though this may be sensed passively through measurement that does not require the user to actively participate in the measurement. The techniques may sense this raised hand in several of the ways described below.
The sensor data may include data sensed using light emitted by the audience sensing device 104 or other signals transmitted, such as with an infrared sensor that bounces emitted infrared light off of a user or audience space (e.g., a sofa, a wall, etc.) and senses returned light. Examples of ways to measure sensor data of a user and to measure sensor data are provided in more detail below.
The audience sensing devices 104 may or may not process the sensor data prior to providing the sensor data to the status module 106 and/or the interest module 108. Thus, the sensor data may be or include raw or processed data such as: RGB (red, green, blue) frames; an infrared data frame; depth data; heart rate; a respiration rate; head orientation or movement of the user (e.g., three-dimensional coordinates x, y, z and three angular pitch (pitch), tilt (tilt), and yaw (yaw)); face (e.g., eyes, nose, and mouth) orientation, movement, or occlusion; orientation, movement, or occlusion of the skeleton; audio, which may include information indicating an orientation sufficient to determine from which user the audio originated or directly indicating which user or what was spoken (if someone is speaking); a thermal reading sufficient to determine or indicate the presence and location of one of the users 116; and distance from the audience sensing devices 104 or the media presentation devices 102. In some cases, the audience sensing devices 104 include infrared sensors (webcams, Kinect cameras), stereo or directional audio microphones, and thermal readouts (plus infrared sensors), although other sensing means may also or instead be used.
The status module 106 receives the sensor data and determines a state 120 (shown at an arrow) of the user 116 in the audience 114 based on the sensor data. States include, for example: sad, talking, disgusted, afraid, smiling, frowning, calm, surprised, angry, laughing, screaming, clapping, waving, cheering, looking away, leaning toward, sleeping, or departed, to name just a few.
The talking state may be a general state indicating that the user is talking, though it may also include subcategories based on the content of the speech, such as talking about the media program (related talking) or talking about something unrelated to the media program (unrelated talking). The status module 106 may determine which category of talking applies through speech recognition.
Based on the sensor data, the status module 106 may also or instead determine the number of users, the identity and/or demographics of the users (shown at 122), or engagement during the presentation (shown at 124). An identity indicates a unique identity of one of the users 116 in the audience 114, such as Susan Brown. Demographics classify one of the users 116, such as 5 feet 4 inches tall, a child, and male or female. Engagement indicates whether the user is likely paying attention to the media program, such as based on the user's presence or head orientation. In some cases, engagement may be determined by the status module 106 with sensor data having a lower resolution, or less processing, than the sensor data used to determine states. Even so, engagement may still be useful in measuring an audience, whether by itself or for determining the user's interest using the interest module 108.
The interest module 108 determines a user's interest level 130 (shown at arrow) for a media program based on the sensor data 118 and/or the user's engagement or status (shown at arrow with engagement/status 126) and information about the media program (shown at arrow with media type 128). Interest module 108 may determine that, for example, a number of laugh states for a media program intended as a serious drama indicate a low interest level and, in turn, a number of laugh states for a media program intended as a comedy indicate a high interest level.
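The genre-dependent mapping described above can be sketched in code. The state names, genre labels, and coarse interest buckets below are illustrative assumptions for the sketch, not part of the disclosure:

```python
# A minimal sketch of interest-level determination, assuming simple string
# labels for states and genres. The scoring rules are illustrative only.

# States suggesting amusement versus disengagement (assumed labels).
AMUSEMENT_STATES = {"laughing", "smiling", "cheering", "clapping"}
DISENGAGED_STATES = {"looking away", "departed", "asleep"}

def interest_level(state: str, media_genre: str) -> str:
    """Map a single media-reaction state plus the program's genre
    to a coarse interest level (low/medium/high)."""
    if state in DISENGAGED_STATES:
        return "low"
    if state in AMUSEMENT_STATES:
        # Laughter during a comedy indicates high interest, but the same
        # laughter during a serious drama suggests low interest in the
        # program's intended tone, as described above.
        return "high" if media_genre == "comedy" else "low"
    return "medium"

assert interest_level("laughing", "comedy") == "high"
assert interest_level("laughing", "drama") == "low"
assert interest_level("asleep", "comedy") == "low"
```

The key design point is that the same state maps to different interest levels depending on the media type 128 supplied with the program information.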
As shown in FIG. 1, the status module 106 and/or the interest module 108 provide demographics/identity 122 and one or more of the following media reactions: engagement 124, state 120, or interest level 130, all shown at arrows in FIG. 1. Based on one or more of these media reactions, the status module 106 and/or the interest module 108 may also provide another type of media reaction, namely an overall media reaction to the media program, such as a rating (e.g., thumbs up or three stars). In some cases, however, the media reactions are instead received by the interface module 110 and the overall media reaction is determined there.
The status module 106 and interest module 108 may be local to the viewer 114, and thus local to the media presentation device 102 and the audience sensing device 104, although this is not required. An example embodiment in which the status module 106 and the interest module 108 are local to the viewer 114 is shown in FIG. 2. However, in some cases, the status module 106 and/or the interest module 108 are remote to the viewer 114, which is shown in fig. 3.
The interface module 110 receives the media reactions and demographics/identity information and determines or receives some indication as to which media program, or which portion thereof, the reactions pertain. The interface module 110 may present, or cause to be presented, a media reaction 132 to the media program through the user interface 112, although this is not required. This media reaction may be any of those mentioned above, some of which are presented in a time-based graph, through an avatar showing the reaction, or through video or audio of the user recorded during the reaction, one or more of which convey how the user reacted during the associated media program.
The interface module 110 may be located locally with respect to the audience 114, such as in the case where a user is viewing his or her own media response or a family member's media response. However, in many cases, the interface module 110 receives the media reaction from a remote source.
Note that the sensor data 118 may include the context in which the user is reacting to the media or the current context of the user for whom a rating or recommendation of the media is requested. Thus, the audience sensing devices 104 may sense that a second person is in the room or otherwise physically near the first person, which may be the first person's context. The context may also be determined in other ways as described below in fig. 2.
Fig. 2 is an illustration of an example computing device 202 local to viewer 114. The computing device 202 includes, or has access to, the media presentation device 102, the audience sensing device 104, one or more processors 204, and a computer-readable storage medium ("CRM") 206.
The CRM 206 includes an operating system 208, a status module 106, an interest module 108, media programs 210 (each of the media programs 210 may include or have associated program information 212 and portions 214), an interface module 110, a user interface 112, a history module 216, a reaction history 218, an advertising module 220 (which may include a plurality of advertisements 222), and a portion module 224.
Each of the media programs 210 may have, include or be associated with program information 212 and portions 214. The program information 212 may indicate the title, episode, author or artist, genre, and other information of the program, including information about portions within each media program 210. Thus, the program information 212 may indicate that one of the media programs 210 is a music video, includes a harmony portion repeated 4 times, includes a verse (verse) portion, includes portions based on each visual presentation during the song, such as the artist singing, the vocal accompaniment singer dancing, the title of the music video, the artist, the year of manufacture, resolution and formatting data, and so forth.
The portions 214 of one of the media programs 210 make up the program or can potentially be used to make up the program. These portions may represent particular time ranges in the media program, though they may instead be located in the program based on where a prior portion ends (even when the time at which that prior portion ends is not necessarily preset). Example portions include a 15-second-long segment, a song played in a radio-like program, or a scene of a movie. The portions 214 may be arranged and/or set in a particular order, in which case one or more of the portions 214 may be replaced by the portion module 224 in response to media reactions. Alternatively, the portions 214 may be prepared in advance but not set in a preset order. Thus, for example, a media program (such as a 30-second-long advertisement) may have a preset first 10-second portion, but 5 alternates for the second 10-second portion and 15 alternates for the third 10-second portion. In this case, which portion to play from the 11th to the 20th second may be based on a person's media reaction to the first 10-second portion. The third portion, played from the 21st to the 30th second, is then determined based on the user's (or perhaps multiple users') reactions to one or both of the first and second portions.
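The 30-second-advertisement example can be sketched as a reaction-keyed lookup over the prepared alternates. The portion names and the rule of selecting the alternate tagged for the observed reaction are illustrative assumptions:

```python
# A sketch of alternate-portion selection: a fixed first 10-second portion,
# then alternates for seconds 11-20 and 21-30 chosen by the viewer's current
# media reaction. Portion names and reaction labels are hypothetical.

def choose_portion(alternates: dict, reaction: str, default: str) -> str:
    """Pick the alternate portion keyed by the current media reaction,
    falling back to a default portion for unrecognized reactions."""
    return alternates.get(reaction, default)

second_portions = {"laughing": "slapstick-followup", "calm": "product-detail"}
third_portions = {"laughing": "joke-ending", "calm": "coupon-offer"}

# The viewer laughed at the first 10-second portion, so seconds 11-20
# continue the humor; a calm reaction to that portion then selects the
# coupon ending for seconds 21-30.
p2 = choose_portion(second_portions, "laughing", "product-detail")
p3 = choose_portion(third_portions, "calm", "joke-ending")
print(p2, p3)  # slapstick-followup coupon-offer
```

In a deployment, the alternates would be the pre-stored or streamed portions 214 and the reaction would arrive from the status module 106 or interest module 108 in real time.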
As noted in the above section, the portion module 224 receives one or more current media responses of a user, a group of users, or perhaps multiple users to a portion of one of the media programs 210. These media reactions may include one or more of engagement 124, status 120, and interest level 130. From these media reactions, the portion module 224 may determine future portions of the currently presented media program to present. Note that this determination may be performed in real-time during the presentation of the media program, and may even be used to determine future portions of the short advertisement based on current reactions to earlier portions of the same presentation of the advertisement. These future portions may be stored locally or remotely in advance. The future portion to be presented may be received from a local storage or from a remote source, such as temporarily by streaming a later portion of the currently presented media program from the remote source. As shown in fig. 2 and 3, media program 210, portion 214, and portion module 224 may be located locally or remotely to computing device 202, and thus locally or remotely to one or more users having media reaction (e.g., user 116-1 of viewer 114 of fig. 1).
The history module 216 includes a reaction history 218 or has access to the reaction history 218. The history module 216 may construct and update a reaction history 218 based on ongoing reactions of the user (or others as noted below) to the media program. In some cases, the history module 216 determines the respective context of the user, however this may in turn be determined and received from other entities. Thus, in some cases, the history module 216 determines the time, the location, the weather of the location, and so forth during the user's reaction to the media program or request for a rating or recommendation of the media program. The history module 216 may determine ratings and/or recommendations for media based on the user's current context and the reaction history 218. As noted elsewhere herein, the reaction history 218 may be used with media reactions to determine future portions of a media program to present.
The advertising module 220 receives a user's current media reaction, such as one or more of engagement 124, status 120, or interest level 130. From this current media reaction, the advertisement module 220 may determine an advertisement of the plurality of advertisements 222 to present to the user. The advertising module 220 may also or instead provide the current media response to the advertiser, receive a bid from the advertiser for the right to present the advertisement, and then cause the advertisement to be presented to the user. This advertisement may have been previously stored as one of the advertisements 222 or temporarily received, such as streaming the advertisement from a remote source in response to the accompanying bid being the highest bid or another bid structure indicating that the advertisement should be presented. Note that in any of these cases, the advertising module 220 may be local or remote to the computing device 202 (and thus to the user (e.g., user 116-1 of viewer 114 of FIG. 1)).
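The bid-based path through the advertising module 220 can be sketched as follows. The `Bid` structure, the callback shape, and the highest-bid-wins rule are illustrative assumptions (the text notes other bid structures are possible):

```python
# A sketch of bid-based advertisement selection: the current media reaction
# is offered to advertisers, each returns a bid, and the highest bidder's
# advertisement is presented. Field names are hypothetical.

from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float
    ad_id: str

def select_ad(current_reaction: str, bidders) -> str:
    """Send the current reaction to each bidder callback and present
    the advertisement attached to the highest bid."""
    bids = [bidder(current_reaction) for bidder in bidders]
    winner = max(bids, key=lambda b: b.amount)
    return winner.ad_id

bidders = [
    # This advertiser values amused viewers more highly.
    lambda r: Bid("CarCo", 2.50 if r == "laughing" else 0.50, "car-ad"),
    lambda r: Bid("SodaInc", 1.75, "soda-ad"),
]
assert select_ad("laughing", bidders) == "car-ad"
assert select_ad("calm", bidders) == "soda-ad"
```

The winning advertisement may already be stored among the advertisements 222 or may be streamed from the advertiser once the bid is accepted, as described above.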
Note that in this illustrated example, the entities including the media presentation device 102, the audience sensing device 104, the status module 106, the interest module 108, the interface module 110, the history module 216, the advertising module 220, and the section module 224 are included within a single computing device (such as a desktop computer with a display, a front-facing camera, a microphone, an audio output, etc.). However, each of these entities may be separate from or integrated with each other in one or more computing devices or otherwise. As will be described in part below, the media presentation device 102 may be integrated with the audience sensing device 104 but separate from the status module 106, the interest module 108, the interface module 110, the history module 216, the advertising module 220, or the portion module 224. Also, each of these modules may operate on separate devices or be combined in one device.
As shown in FIG. 2, the computing devices 202 may each be one or a combination of various devices, here shown in six examples: a laptop computer 202-1, a tablet computer 202-2, a smartphone 202-3, a set-top box 202-4, a desktop computer 202-5, and a gaming system 202-6, though other computing devices and systems, such as televisions with computing capabilities, netbooks, and cellular telephones, may also be used. Note that three of these computing devices 202 include the media presentation device 102 and the audience sensing device 104 (laptop 202-1, tablet 202-2, smartphone 202-3). One device does not include, but is in communication with, the media presentation device 102 and the audience sensing device 104 (desktop computer 202-5). Two other devices do not include the media presentation device 102 and may or may not include the audience sensing device 104, such as where the audience sensing device 104 is included within the media presentation device 102 (set-top box 202-4 and gaming system 202-6).
FIG. 3 is an illustration of an example remote computing device 302 that is remote to the audience 114. FIG. 3 also shows a communication network 304 through which the remote computing device 302 communicates with the audience sensing device 104 (not shown, but implemented within or in communication with the computing device 202), the interface module 110, the history module 216 (with or without the reaction history 218), the advertising module 220 (with or without the advertisements 222), and the portion module 224, assuming these entities are in the computing device 202 as shown in FIG. 2. The communication network 304 may be the internet, a local area network, a wide area network, a wireless network, a USB hub, a computer bus, a mobile communication network, or a combination of these.
The remote computing device 302 includes one or more processors 306 and a remote computer-readable storage medium ("remote CRM") 308. The remote CRM 308 includes a status module 106, an interest module 108, media programs 210 (each of the media programs 210 may include or have associated program information 212 and/or portions 214), a history module 216, a reaction history 218, an advertisement module 220, an advertisement 222, and a portions module 224.
Note that in this illustrated example, the media presentation device 102 and the audience sensing device 104 are physically separate from the status module 106 and the interest module 108, with the first two operating locally to the audience watching the media program and the second two operating remotely. Thus, sensor data is communicated from the audience sensing device 104 to one or both of the status module 106 or the interest module 108, which may be local (FIG. 2) or remote (FIG. 3). Moreover, after being determined by the status module 106 and/or the interest module 108, the respective media reactions and other information may be communicated to the same or other computing devices 202 for receipt by the interface module 110, the history module 216, the advertising module 220, and/or the portion module 224. Thus, in some cases, a first one of the computing devices 202 may measure sensor data and transmit that sensor data to the remote device 302, which then transmits the media reactions to another one of the computing devices 202, all over the network 304.
These and other capabilities, as well as the manner in which the entities of fig. 1-3 act and interact, are set forth in greater detail below. These entities may be further divided, combined, and the like. The environment 100 of fig. 1 and the detailed illustrations of fig. 2 and 3 show some of many possible environments in which the described techniques can be employed.
Example method
Determining media response based on passive sensor data
Fig. 4 depicts a method 400 of determining media response based on passive sensor data. These and other methods described herein are illustrated as sets of blocks that specify operations performed, but are not necessarily limited to the orders shown for performing the operations of the respective blocks. In portions of the following discussion, reference will be made to the environment 100 of fig. 1 and entities illustrated in detail in fig. 2-3, the reference being made thereto for exemplary purposes only. The techniques are not limited to being performed by one entity or multiple entities operating on one device.
Block 402 senses or receives sensor data of a viewer or user that is passively sensed during presentation of a media program to the viewer or user. This sensor data may include the context of the viewer or user, or a context that is received separately.
For example, consider a case where the audience includes all three users 116 of FIG. 1: users 116-1, 116-2, and 116-3. Assume that the media presentation device 102 is an LCD display having speakers through which the media program is presented, and that the display is in communication with the set-top box 202-4 of FIG. 2. Here the audience sensing device 104 is a forward-facing high-resolution infrared sensor, a red-green-blue sensor, and two microphones capable of sensing sound and location, integrated with the set-top box 202-4 or the media presentation device 102. Assume also that the media program 210 being presented is a PG-rated animated movie named Incredible Family, streamed from a remote source through the set-top box 202-4. The set-top box 202-4 presents Incredible Family with six advertisements: one at the start of the movie, three in a three-ad block, and two in a two-ad block.
Sensor data is received for all three users 116 in the audience 114; for this example, consider only user 116-1. Assume here that, over the course of the movie, the audience sensing device 104 measures, and then provides at block 402, the following for user 116-1 at various times:
Time 1: head oriented 3 degrees, no audio or low-amplitude audio.
Time 2: head oriented 24 degrees, no audio.
Time 3: skeletal movement (arms), high-amplitude audio.
Time 4: skeletal movement (arms and body), high-amplitude audio.
Time 5: head movement, facial-feature change (20%), medium-amplitude audio.
Time 6: detailed facial orientation data, no audio.
Time 7: skeletal orientation (missing), no audio.
Time 8: facial orientation data, respiration rate.
Block 404 determines, based on the sensor data, a state of the user during the media program. In some cases block 404 determines a probability for a state, or multiple probabilities for multiple states, respectively. For example, block 404 may determine that a state is likely correct but not with full certainty (e.g., a 40% chance that the user is laughing). Block 404 may also or instead determine, based on the sensor data, that multiple states are possible, such as sad and calm states, along with a probability for each (e.g., sad state 65%, calm state 35%).
Block 404 may also or instead determine demographics, identity, and/or engagement. Also, the method 400 may skip block 404 and proceed directly to block 406, as described later below.
Continuing the ongoing example, the status module 106 receives the sensor data listed above and determines the following corresponding states for user 116-1:
Time 1: Looking toward.
Time 2: Looking away.
Time 3: Clapping.
Time 4: Cheering.
Time 5: Laughing.
Time 6: Smiling.
Time 7: Departed.
Time 8: Asleep.
At time 1, the status module 106 determines that the state of user 116-1 is looking toward the media program based on sensor data indicating that the head of user 116-1 deviates 3 degrees from looking directly at the LCD display, and on a rule indicating that the looking-toward state applies to deviations of less than 20 degrees (as an example only). Similarly, at time 2, the status module 106 determines that user 116-1 is looking away because the deviation is greater than 20 degrees.
At time 3, the status module 106 determines that the user 116-1 is clapping based on sensor data indicating skeletal movement of his arm and high amplitude audio. The status module 106 may distinguish clapping from other statuses (such as cheering) based on the type of arm movement (not indicated above for simplicity). Similarly, at time 4, the status module 106 determines that the user 116-1 is cheering due to arm movement and high amplitude audio attributable to the user 116-1.
At time 5, the status module 106 determines that the user 116-1 is laughing based on sensor data indicating that the user 116-1 has head movement, a 20% facial feature change, and medium amplitude audio. Various sensor data may be used to distinguish between related states, such as distinguishing laughing from screaming based on the audio being of medium rather than high amplitude, and on facial feature changes such as an opening of the mouth and a raising of both eyebrows.
For time 6, the audience sensing device 104 processes the raw sensor data to provide processed sensor data, in this case performing facial recognition processing to provide detailed face orientation data. In conjunction with the absence of audio, the status module 106 determines that this detailed face orientation data (here, upturned mouth corners and an amount of eyelid coverage) indicates that the user 116-1 is smiling.
At time 7, the status module 106 determines that the user 116-1 has departed based on sensor data indicating that the user 116-1 has skeletal movement away from the audience sensing device 104. The sensor data may also indicate this directly, such as when the audience sensing device 104 no longer senses the presence of the user 116-1 (either by having no skeletal or head readings, or by no longer receiving a thermal signature).
At time 8, status module 106 determines that user 116-1 is asleep based on sensor data indicating that the facial orientation of user 116-1 has not changed for a certain period of time (e.g., the user's eyes have not blinked), and a steady, slow breathing rate.
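The per-time determinations above amount to a small rule set. The following sketch encodes them in the order described; the field names of the sensor reading are assumptions, and only the 20-degree rule for the "looking toward" status is stated in the example itself:

```python
def classify_state(reading):
    """Rule-based status determination following times 1-8 above.

    `reading` is a dict of hypothetical sensor fields.
    """
    if reading.get("departed"):
        return "departed"
    if reading.get("face_static") and reading.get("breathing") == "slow":
        # Unchanged facial orientation plus steady, slow breathing.
        return "asleep"
    if reading.get("arm_movement") and reading.get("audio") == "high":
        # The type of arm movement distinguishes clapping from cheering.
        return "clapping" if reading.get("movement_type") == "clap" else "cheering"
    if reading.get("head_movement") and reading.get("audio") == "medium":
        return "laughing"
    if reading.get("head_deviation_deg", 90.0) < 20.0:
        # The example's stated rule: under 20 degrees is "looking toward".
        return "looking toward"
    return "looking away"

print(classify_state({"head_deviation_deg": 3}))   # looking toward
print(classify_state({"head_deviation_deg": 24}))  # looking away
```

The ordering of the rules matters: a departed or asleep user should not fall through to the head-deviation check.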
These eight sensor readings are a simplified example for explanatory purposes. Sensor data may include extensive data, as noted elsewhere herein. Moreover, sensor data may be received that measures the audience every fraction of a second, thereby providing detailed data for tens, hundreds, or thousands of time periods during the presentation of a media program, from which statuses or other media reactions may be determined.
Returning to method 400, block 404 may determine demographics, identity, and engagement in addition to the user's status. The status module 106 may determine or receive sensor data, determine demographics and identities from the sensor data, or receive demographics or identities from the audience sensing devices 104. Continuing the ongoing example, the sensor data may indicate that the user 116-1 is John Brown, the user 116-2 is Lydia Brown, and the user 116-3 is Susan Brown. Alternatively, the sensor data may indicate, for example, that the user 116-1 is six feet, four inches tall and male (based on skeletal orientation). The sensor data may be received with, or include, information indicating which portions of the sensor data are attributable to each user in the audience. In this example, however, assume that the audience sensing device 104 provides three sets of sensor data, each set indicating the identity of the user along with that user's sensor data.
Still at block 404, the techniques may determine the engagement of an audience or of users in the audience. As noted, this determination may be less refined than a determination of a user's status, but is still useful. Assume for the above example that sensor data is received for user 116-2 (Lydia Brown) and that this sensor data includes only head and skeletal orientation.
At time 1, the head is oriented 0 degrees and the skeleton is oriented with the upper torso in front of the lower torso.
At time 2, the head is oriented 2 degrees and the skeleton is oriented with the upper torso in front of the lower torso.
At time 3, the head is oriented 5 degrees and the skeleton is oriented with the upper torso approximately flush with the lower torso.
At time 4, the head is oriented 2 degrees and the skeleton is oriented with the upper torso behind the lower torso.
At time 5, the head is oriented 16 degrees and the skeleton is oriented with the upper torso behind the lower torso.
At time 6, the head is oriented 37 degrees and the skeleton is oriented with the upper torso behind the lower torso.
At time 7, the head is oriented 5 degrees and the skeleton is oriented with the upper torso in front of the lower torso.
At time 8, the head is oriented 1 degree and the skeleton is oriented with the upper torso in front of the lower torso.
The status module 106 receives this sensor data and determines the following corresponding engagement levels for Lydia Brown:
Time 1: high engagement.
Time 2: high engagement.
Time 3: medium-high engagement.
Time 4: medium engagement.
Time 5: medium-low engagement.
Time 6: low engagement.
Time 7: high engagement.
Time 8: high engagement.
At times 1, 2, 7, and 8, the status module 106 determines that Lydia is highly engaged in Incredible Family based on sensor data indicating that the head of the user 116-2 is deviated 5 degrees or less from looking directly at the LCD display, and a skeletal orientation with the upper torso in front of the lower torso (indicating that Lydia is leaning toward the media presentation).
At time 3, the status module 106 determines that Lydia's engagement level has dropped because Lydia is no longer leaning forward. At time 4, the status module 106 determines that Lydia's engagement has further decreased, to medium, based on Lydia leaning backward, even though she is still looking almost directly at Incredible Family.
At times 5 and 6, the status module 106 determines that Lydia is less engaged, falling to medium-low and then to low engagement, based on Lydia still leaning backward and looking slightly away (16 degrees) and then significantly away (37 degrees), respectively. Note that at time 7 Lydia quickly returns to high engagement, which may be of interest to the media creator because it indicates what is considered exciting or otherwise influential.
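The engagement determinations for Lydia Brown follow directly from head deviation and lean. The sketch below reproduces the example's eight results; the lean labels are assumed names for the upper torso being in front of, flush with, or behind the lower torso:

```python
def engagement_level(head_deviation_deg, leaning):
    """Engagement heuristic matching the Lydia Brown example.

    `leaning` is "forward", "neutral", or "backward" (assumed labels).
    """
    if head_deviation_deg <= 5:
        if leaning == "forward":
            return "high"
        if leaning == "neutral":
            return "medium-high"
        return "medium"        # leaning backward but still watching
    if head_deviation_deg <= 20:
        return "medium-low"    # slightly looking away
    return "low"               # significantly looking away

# The eight (deviation, lean) readings for times 1-8:
readings = [(0, "forward"), (2, "forward"), (5, "neutral"), (2, "backward"),
            (16, "backward"), (37, "backward"), (5, "forward"), (1, "forward")]
print([engagement_level(d, l) for d, l in readings])
# high, high, medium-high, medium, medium-low, low, high, high
```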
The method 400 may proceed directly from block 402 to block 406, or from block 404 to block 406 or block 408. If proceeding from block 404 to block 406, the techniques determine an interest level based on the type of media being presented and the user's engagement or status. If proceeding from block 402 to block 406, the techniques determine the interest level based on the type of media being presented and the user's sensor data, without first or independently determining the user's engagement or status.
Continuing the above example for users 116-1 and 116-2, assume that block 406 receives the states determined by the status module 106 for user 116-1 (John Brown) at block 404. Based on the states of John Brown and information about the media program, the interest module 108 determines the level of interest (overall or over time) for Incredible Family. Assume here that Incredible Family is both an adventure program and a comedy program, with portions of the program marked as having one of these media types. Although simplified, assume that times 1 and 2 are marked as comedy, times 3 and 4 as adventure, times 5 and 6 as comedy, and times 7 and 8 as adventure. Revisiting the states determined by the status module 106, consider the following again:
Time 1: looking toward.
Time 2: looking away.
Time 3: clapping.
Time 4: cheering.
Time 5: laughing.
Time 6: smiling.
Time 7: departed.
Time 8: asleep.
Based on these states, the interest module 108 determines that for time 1, John Brown has medium-low interest in the content at time 1. If this were an adventure or drama type, the interest module 108 might instead determine that John Brown is highly interested. Here, however, because the content is comedy and thus intended to evoke laughter or a similar state, the interest module 108 determines that John Brown has medium-low interest at time 1. Similarly, for time 2, the interest module 108 determines that John Brown has low interest at time 2 because his status is not laughing or smiling but looking away.
At times 3 and 4, the interest module 108 determines that John Brown has a high level of interest based on the adventure type and the statuses of clapping and cheering at these times. At time 6, based on the comedy type and John Brown's smiling, he is determined to have a medium level of interest at this time.
At times 7 and 8, the interest module 108 determines that John Brown has a very low level of interest. The media type here is adventure, though in this case the interest module 108 would determine John Brown's interest level to be very low for most types of content.
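The interest determinations for John Brown pair a media type with a state. A sketch using a lookup table built from the outcomes in this example; the table entries not stated in the example (e.g., laughing during comedy) and the "medium" fallback are illustrative assumptions:

```python
# (media type, state) -> interest level, from the John Brown example.
INTEREST_RULES = {
    ("comedy", "looking toward"): "medium-low",
    ("comedy", "looking away"): "low",
    ("adventure", "clapping"): "high",
    ("adventure", "cheering"): "high",
    ("comedy", "laughing"): "high",      # assumed: comedy aims at laughter
    ("comedy", "smiling"): "medium",
    ("adventure", "departed"): "very low",
    ("adventure", "asleep"): "very low",
}

def interest_level(media_type, state):
    """Interest for a state, conditioned on the media type of the portion."""
    return INTEREST_RULES.get((media_type, state), "medium")

print(interest_level("comedy", "smiling"))  # medium
```

The point of the table form is that the same state yields different interest levels under different media types, as the clapping-during-adventure versus looking-toward-during-comedy contrast shows.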
It is readily seen that advertisers, media providers, and media creators may benefit from knowing a user's engagement or interest level. Assume here that the interest level for Incredible Family is provided over time, along with demographic information about John Brown. Using this information from multiple demographically similar users, the media creator may learn that adult males are interested in certain adventure content but find most of the comedy portions uninteresting, at least for this demographic group.
As a more detailed example, consider FIG. 5, which shows a time-based graph 500 having an interest level 502 for 40 time periods 504 over a portion of a media program. Assume here that the media program is a movie that includes other media programs, namely advertisements, at time periods 18 through 30. As shown, the interest module 108 determines that the user starts at a medium interest level and then bounces between medium, medium-high, and very high interest levels up to time period 18. During the first advertisement, covering time periods 18 through 22, the interest module 108 determines that the user has a medium-low interest level. For time periods 23 through 28, however, the interest module 108 determines that the user has a very low interest level (because, for example, he is looking away and talking, or has left the room). For the last advertisement, covering time periods 28 through 32, though, the interest module 108 determines that the user has a medium interest level for time periods 29 through 32, the majority of the advertisement.
This may be valuable information: the user stayed for the first advertisement, left for the middle advertisement and the beginning of the last advertisement, and returned, with medium interest, for most of the last advertisement. This resolution and accuracy of interest contrasts with certain conventional approaches, which may provide no information about how many of the people who watched the movie actually watched the advertisements, which advertisements were watched, and with what level of interest. If this example is a common trend across the viewing population, the price for advertisements in the middle of a block may go down, while other advertisement prices may also be adjusted. Alternatively, advertisers and media providers may learn to play shorter advertisement blocks having, for example, only two advertisements. Interest level 502 also provides valuable information about portions of the movie itself, such as through the very high interest level at time period 7 (e.g., a particularly engaging scene of the movie) and the waning interest at time periods 35 through 38.
Note that in some cases an engagement level, while useful, may be less useful or less accurate than a status and interest level. For example, for engagement levels alone, the status module 106 may determine that a user is not engaged if the user's face is occluded (blocked) and thus not watching the media program. If, however, the user's face is occluded by the user's hands (skeletal orientation) and the audio indicates high-volume audio, the status module 106, in determining a status, may determine that the user is screaming. Combined with the content being horror or suspense, a screaming status indicates a very high interest level. This is just one example of how an interest level may differ markedly from an engagement level.
As indicated above, method 400 may proceed directly from block 402 to block 406. In this case, the interest module 108, alone or in combination with the status module 106, determines the interest level based on the type of media (including multiple media types for different portions of the media program) and the sensor data. As an example, for sensor data for John Brown at time 4, indicating skeletal movement (arms and body) and high amplitude audio, and for a comedy, sports, conflict-based talk show, adventure-based video game, tweet, or horror type, the interest module 108 may determine that John Brown has a high interest level at time 4. Conversely, for the same sensor data at time 4, but for a drama, a melodrama, or classical music, the interest module 108 may determine that John Brown has a low interest level at time 4. This may be performed based on the sensor data without first determining an engagement level or status, though these may also be determined.
Following block 404 or 406, block 408 provides demographics, identity, engagement, status, and/or interest level. The status module 106 or interest module 108 may provide this information to various entities, such as the interface module 110, the history module 216, the advertisement module 220, and/or the portion module 224, among others.
Providing this information to advertisers after an advertisement is presented (with the media reaction determined during the advertisement) can effectively enable advertisers to measure the value of their advertisements shown during a media program. Providing this information to media creators can effectively enable them to assess the potential value of similar media programs or portions thereof. For example, before releasing a media program to the general public, a media creator may identify portions of the media program that are poorly received and modify the media program to improve it.
Providing this information to the rating entity may effectively enable the rating entity to automatically rate the media program for the user. Some other entity, such as a media controller, may use this information to improve media control and presentation. For example, the local controller may pause the media program in response to all users in the audience leaving the room.
Providing media reactions to the history module 216 can effectively enable the history module 216 to construct and update the reaction history 218. The history module 216 may construct the reaction history 218 based on a context or contexts in which each set of media reactions to a media program is received, or the media reactions may be weighted, in whole or in part, by their context. Thus, a media reaction for which the context was the user watching a television program after work on a Wednesday night may be adjusted to reflect that the user may be tired from work.
As noted herein, the techniques may determine numerous states for a user over the course of most media programs, even for media as short as a 15-second advertisement or video snippet. In such a case, block 404 is repeated, such as at one-second intervals.
Moreover, the status module 106 may determine not only multiple statuses of the user over time, but also various different statuses at particular times. For example, a user may be both laughing and looking away, both of which are states that may be determined and provided or used to determine the user's level of interest.
Further, either or both of the status module 106 and the interest module 108 may determine engagement, status, and/or interest level based on historical data in addition to sensor data or media type. In one case, a user's historical sensor data is used to normalize the user's engagement, status, or interest level (e.g., dynamically for a current media reaction). For example, if Susan Brown is watching a media program and her sensor data is received, the techniques may normalize or otherwise learn how best to determine her engagement, statuses, and interest levels based on her historical sensor data. If Susan Brown's historical sensor data indicates that she is not a particularly expressive or vocal user, the techniques may adjust for this history. Thus, lower-amplitude audio may be sufficient to determine that Susan Brown laughed, compared to the higher-amplitude audio typically used to determine that a user laughed.
In another case, the historical engagement, statuses, or interest levels of the user for whom sensor data is received are compared with those of others. Thus, a lower interest level for Lydia Brown may be determined based on data indicating that she exhibits high interest in nearly every media program she watches, compared with the interest levels of others (either generally or for the same media program). In any of these cases, the techniques learn over time and may thereby normalize engagement, statuses, and/or interest levels.
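The per-user normalization described for Susan Brown might be sketched as follows; the 0-1 amplitude scale, the 0.5 nominal average, and the 0.6 default threshold are all assumed values, not the patent's:

```python
def laugh_threshold(history_amplitudes, default=0.6):
    """Audio threshold for the "laughing" status, normalized per user.

    A user whose historical audio amplitudes run low (e.g., a user who,
    like Susan Brown, is not particularly expressive or vocal) gets a
    proportionally lower threshold, capped at the default.
    """
    if not history_amplitudes:
        return default
    avg = sum(history_amplitudes) / len(history_amplitudes)
    # Scale by expressiveness relative to a nominal average of 0.5.
    return default * min(1.0, avg / 0.5)

print(laugh_threshold([0.2, 0.3]))  # a quieter user: lower threshold
print(laugh_threshold([0.8]))       # an expressive user: the default
```

The same normalization pattern (compare a new reading against the user's own history, or against the population, before classifying) applies equally to engagement and interest levels.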
Method for constructing a reaction history
As noted above, the techniques may determine a user's engagement, statuses, and/or interest levels for various media programs. Moreover, these techniques may do so using passive or active sensor data. With these media reactions, the techniques may construct a reaction history for the user. This reaction history can be used in various ways as described elsewhere herein.
FIG. 6 depicts a method 600 for constructing a reaction history based on a user's reactions to media programs. Block 602 receives sets of reactions of a user sensed during presentation of multiple respective media programs, along with information about the respective media programs. An example reaction set for a media program is shown in FIG. 5, which shows a measure of interest level over the time in which the program was presented to the user.
The information about the respective media programs may include, for example, the name of the media (e.g., The Office, episode 104) and its category (e.g., a song, television show, or advertisement), as well as other information described herein.
In addition to the media reactions and their corresponding media programs, block 602 may also receive a user's context during presentation of the media programs as described above.
Further, block 602 may receive media reactions from other users, which can be used in constructing the reaction history. Thus, the history module 216 may determine, based on the user's media reactions (in part, or after constructing an initial or preliminary reaction history for the user), other users having reactions similar to those of the user. The history module 216 may determine other persons with similar reactions and use those persons' reactions to programs the user has not yet seen or heard to refine the user's reaction history.
Block 604 constructs a reaction history for the user based on the user's sets of reactions and the information about the corresponding media programs. As noted, block 604 may also use the reaction histories, contexts, and so forth of others to construct the user's reaction history. This reaction history may be used, as described elsewhere herein, to determine programs a user is likely to enjoy, advertisements likely to be effective when displayed to the user, and for other purposes noted herein.
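Block 604's construction of a reaction history can be sketched as a simple aggregation of reaction sets keyed by program information. The numeric level mapping and the averaging are illustrative assumptions; a real history would likely keep the full time series and context as well:

```python
def build_reaction_history(reaction_sets):
    """Aggregate per-program reaction sets into a reaction history.

    `reaction_sets` maps (program name, category) to a list of interest
    levels observed over the presentation; the history stores a mean
    score per program (an assumed, simplified representation).
    """
    levels = {"low": 0, "medium": 1, "high": 2}
    return {program: sum(levels[l] for l in obs) / len(obs)
            for program, obs in reaction_sets.items()}

history = build_reaction_history({
    ("The Office, episode 104", "television show"): ["high", "medium", "high"],
    ("Incredible Family", "television show"): ["medium", "low"],
})
print(history[("Incredible Family", "television show")])  # 0.5
```

Keying by both name and category preserves the program information block 602 receives, so later consumers (e.g., an advertisement module) can weight history entries by category.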
Method for presenting advertisements based on current media reactions
As noted above, the techniques may determine a user's current media reaction, such as engagement, status, and/or interest level. The following method is directed to how the current media reaction can be used to determine the advertisement to present.
Fig. 7 depicts a method 700 for presenting an advertisement based on a current media reaction, including by determining which of a plurality of potential advertisements to present.
Block 702 receives a current media reaction of a user to a media program currently being presented to the user. The current media reaction may be one of many types and to many kinds of media, such as laughing at a scene in a comedy, cheering at a live sporting event, dancing to a song or music video, being distracted during a drama, intently watching an advertisement during a movie, or talking to another person in the room who is also watching a news program, to name just a few. The media program is one currently being presented to a user (such as user 116-1 of FIG. 1), so the reaction is not a historical media reaction; however, in addition to the most recent current media reaction, a reaction history or other current media reactions made earlier during the same media program may also be used.
By way of example, consider FIG. 8, which shows, at time-based state graph 800, current media reactions to a comedy program (The Office, episode 104) over a portion of the program as the program is being presented. Shown are 23 media reactions 802 that the advertisement module 220 receives from the status module 106, here statuses of a user named Amelia Pond. For visual simplicity, time-based graph 800 shows only four states, each with its own symbol: laughing, smiling, interested, and departed (shown with an "X").
Block 704 determines which of multiple potential advertisements to present based on the current media reaction to the media program. Block 704 may determine which advertisement to show, and when to show it, based on the current media reaction as well as other information, such as the user's reaction history (e.g., reaction history 218 for Amelia Pond of FIG. 2), the context of the current media reaction (e.g., that it is sunny at Amelia Pond's location, or that she has just come home from school), the user's demographics (e.g., a 16-year-old, English-speaking female in Seattle, Washington), the type of media program (e.g., comedy), or the media reaction of another user who is also in the audience (e.g., a reaction of Amelia Pond's brother). Block 704 may determine which advertisement to show based on the current media reaction immediately preceding the advertisement (such as the reaction to the last scene of the program before the advertisement), though block 704 may also use current media reactions that do not immediately precede the advertisement, or multiple current media reactions, such as the last six media reactions, and so forth.
Continuing the ongoing example, assume that the current media reaction is reaction 804 of FIG. 8, where Amelia Pond is laughing at the current scene of The Office. Assume also that at the end of the scene (which ends within 15 seconds), a first advertisement block 806 begins. This first advertisement block 806 is one minute long and is slotted to include two 30-second advertisements: advertisement #1 808 and advertisement #2 810.
Assume also for this case that a first advertiser has previously purchased the rights to advertisement #1 808 and has previously provided three different potential advertisements for this slot, one of which will be played based on the current media reaction. The advertising module 220 thus first ascertains these three potential advertisements, from either of advertisements 222 of FIG. 2 or 3, and then ascertains which is appropriate. Here the advertiser knew in advance that the program would be The Office, episode 104. Assume this is the program's first presentation, so media reactions of other users have not yet been recorded for the full program. Based on general information about the program, however, one advertisement is indicated as appropriate to play if the current media reaction is laughing or smiling, another as appropriate if the reaction is departed, and the third as appropriate for all states. Assume that the advertiser is a large automobile manufacturer, that the first advertisement (for laughing or smiling) is for a fun, fast sports car, that the second advertisement is repetitive and audio-focused, stating the advantages of the manufacturer (e.g., Desoto cars are fast, Desoto cars are fun, Desoto cars are a good value), because it will be played when the user has departed the room, in the hope that the user is within listening distance, and that the third advertisement is for a practical family car.
Note that this is a relatively simple case, using the current media reaction and based in part on the genre or other general information about the program. An advertiser may instead provide 20 advertisements keyed to many different current media reactions, the user's demographics, and the user's reaction history. Thus, the advertising module 220 may determine that 5 of the 20 advertisements are likely appropriate based on the user being a male between the ages of 34 and 50, thereby excluding various automobiles sold by the manufacturer that generally sell poorly to men of this age group. The advertising module 220 may further determine that 2 of those 5 advertisements are more appropriate based on the user's reaction history indicating positive reactions to fishing and car-racing shows, these 2 advertisements showing trucks and sport utility vehicles. Finally, the advertising module 220 may determine which of the 2 advertisements to present based on the user's current media reaction indicating that the user is highly engaged in the program, and thus display the truck advertisement showing details of the truck, on the assumption that the user is paying close enough attention to appreciate those details, rather than a less-detailed, more-stylized advertisement.
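The three-stage narrowing in this example (demographics, then reaction-history affinities, then the current reaction) can be sketched as a filter pipeline. All advertisement fields, affinity labels, and filter criteria below are hypothetical illustrations:

```python
def select_advertisement(ads, demographics, history, current_reaction):
    """Narrow candidate advertisements in three stages, as in the
    20-to-5-to-2-to-1 truck example. Every field name is assumed."""
    # Stage 1: demographics (e.g., male, aged 34-50).
    pool = [a for a in ads
            if demographics["gender"] in a["target_genders"]
            and a["min_age"] <= demographics["age"] <= a["max_age"]]
    # Stage 2: reaction-history affinities (e.g., fishing, racing shows).
    by_affinity = [a for a in pool if a["affinity"] in history["positive_topics"]]
    pool = by_affinity or pool
    # Stage 3: a highly engaged viewer gets the detail-rich creative.
    if current_reaction == "highly engaged":
        detailed = [a for a in pool if a["detailed"]]
        if detailed:
            return detailed[0]
    return pool[0] if pool else None

ads = [
    {"name": "detailed truck", "target_genders": {"male"}, "min_age": 34,
     "max_age": 50, "affinity": "fishing", "detailed": True},
    {"name": "stylized SUV", "target_genders": {"male"}, "min_age": 34,
     "max_age": 50, "affinity": "racing", "detailed": False},
    {"name": "family sedan", "target_genders": {"female"}, "min_age": 18,
     "max_age": 99, "affinity": "news", "detailed": False},
]
chosen = select_advertisement(ads, {"gender": "male", "age": 40},
                              {"positive_topics": {"fishing", "racing"}},
                              "highly engaged")
print(chosen["name"])  # detailed truck
```

Falling back to the unfiltered pool when an affinity filter empties it keeps the pipeline from returning nothing when the history is sparse.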
Block 706 causes the determined advertisement to be presented during the current presentation time period in which the media program is presented, or immediately after the media program completes. Block 706 may do so by presenting the advertisement itself or by indicating to a presentation entity (such as the media presentation device 102 of FIG. 2) that the determined advertisement should be presented. The current presentation time period is an amount of time sufficient to present the media program, but it may also include time sufficient to present a predetermined number of advertisements or time in which an advertisement is presented.
Summing up the ongoing example regarding Amelia Pond, consider again FIG. 8. Here the advertising module 220 causes the media presentation device 102 of FIG. 2 to present the first advertisement, for the fun, fast sports car, based on Amelia's current media reaction being laughing.
The advertising module 220 may base this determination on media reactions other than the most recent one, whether those reactions are to the media program itself or to other programs (such as those on which the user's reaction history is based). The current media reactions may also include reactions received during the current presentation time period but not to the program itself. Thus, the user's reaction to a prior advertisement shown in an advertisement block within the current presentation time period may also be used to determine which advertisement to present.
Method 700 may be repeated, and thus advertisement #2 810 may be selected based at least on the interested state shown at advertisement reaction 812. Method 700 may accordingly be repeated for each advertisement within the current presentation time period and for the then-current reactions, whether those reactions are to the program or to an advertisement.
Other advertisement reactions are also shown: a second advertisement reaction 814, a third advertisement reaction 816 to advertisement #3 818 of a second advertisement block 820, and a fourth advertisement reaction 822 to advertisement #4 824. Note that the advertising module 220 determines the third advertisement to present based in part on the departed state 826, and determines the fourth advertisement to present based on the user laughing during the third advertisement. These are just some of the many ways in which the techniques may use current media reactions to determine an advertisement to present.
Optionally, the techniques may determine pricing for advertisements based on the current media reaction to the media program. Thus, an advertisement may cost less if the user is currently departed, and more if the user is currently laughing or otherwise engaged. The techniques can then set a price for the advertisement based on the media reaction, including presenting the advertisement independent of a bid from the advertiser. In such a case, the techniques may present the advertisement based on the advertiser agreeing, or having agreed, to the price (rather than under a highest-bid structure, or under some combination of bids and determined pricing). One example of combining bids with determined pricing is a starting price set by the techniques based on the media reaction, with subsequent bids from advertisers based on that starting price.
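The reaction-based pricing described here might be sketched as a multiplier applied to a base price. The multiplier values, and the choice to round to cents, are illustrative assumptions:

```python
def advertisement_price(base_price, current_reaction):
    """Scale a base advertisement price by the current media reaction:
    cheaper when the user has departed, pricier when laughing or
    otherwise engaged. All multiplier values are assumed."""
    multipliers = {"departed": 0.5, "looking away": 0.8,
                   "engaged": 1.2, "laughing": 1.5}
    return round(base_price * multipliers.get(current_reaction, 1.0), 2)

print(advertisement_price(100, "departed"))  # 50.0
print(advertisement_price(100, "laughing"))  # 150.0
```

Such a computed price could serve as the starting price on which advertisers then bid, per the combined bid-and-determined-pricing option above.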
Also optionally, the techniques may enable a user to interact explicitly with an advertisement. For example, an advertisement may include an explicit request for a particular media reaction in order to facilitate an offer. Thus, the detailed truck advertisement may include text or audio requesting that the user raise his or her hand to have a detailed sales brochure sent to the user's email or home address, or an advertisement for a pizza-delivery chain may request that the user cheer to receive a half-price discount on a pizza delivered during the currently playing soccer game. If the user raises his or her hand, the techniques may communicate this status to the associated advertiser, who may then send back the phone number of the user's local store, to be displayed within the advertisement along with the half-price discount code for the pizza.
FIG. 9 depicts a method 900 for presenting an advertisement based on a current media response, including based on bids from advertisers.
Block 902 provides an advertiser with a user's current media reaction to a media program currently being presented to the user. Block 902 may provide the current media reaction, received or determined in the various manners described above, such as by the status module 106, interest module 108, and/or advertising module 220. Block 902 may also provide other information, such as the user's reaction history or portions thereof, demographic information about the user, the context in which the media program is presented to the user, or information about the media program.
For example, consider FIG. 10, which shows the advertising module 220 providing demographics 1002, a portion 1004 of a reaction history, a current media reaction 1006, and information 1008 about the media program to advertisers 1010 (shown as first, second, and third advertisers 1010-1, 1010-2, and 1010-3, respectively) over communication network 314.
Assume here that demographics 1002 indicate that the user is a 33-year-old female, married, with one child. Assume also that portion 1004 of the reaction history indicates the identity of the user (here named Melody Pond), her preference for science-fiction programs and the Olympic Games, and prior positive reactions to movie trailers, shoe-sale advertisements, and triathlon advertisements. Assume further that the current media reaction 1006 indicates disappointment (a sad state) and that the information 1008 about the media program indicates that the program is a swimming competition, in which the portion that prompted the current media reaction (the sad state) showed Michael Phelps finishing second to the Australian swimmer Ian Thorpe in an international swimming meet.
Block 904 receives bids from advertisers for a right to present a respective advertisement to the user during the current presentation time period in which the media program is presented. This right may be to present the advertisement immediately, such as just after the scene or portion prompting the current media reaction completes and before another advertisement is displayed. This right may instead be for a later part of the current presentation time period, such as a second advertisement slot after the scene or an advertisement in a block occurring, for example, five minutes later.
Consider the above example, in which the user is in a sad state just prior to an advertisement being displayed. Some advertisers will be less interested in presenting advertisements to users in this state and will therefore bid lower for the right to display their advertisements, while other advertisers may consider their advertisements more effective for people in a sad state. Advertisers may also consider the user's demographics, reaction history, and the program being watched, and assign values accordingly. For example, an advertiser selling life insurance or investment planning may, compared to an advertiser selling carpet-cleaning products, bid high for a slot directly following a sad state and for a married viewer with a child.
For this example, assume that all three advertisers 1010 bid for the right to display an advertisement and that each bid includes information sufficient for advertisement module 220 to cause the advertisement to be presented, such as an indicator of one of advertisements 222 or a uniform resource locator from which to retrieve the advertisement.
Block 906 causes the advertisement associated with one of the bids to be presented to the user during the current presentation time period in which the media program is presented. Block 906 may select which advertisement to display based on the bid amounts, although the highest bid is not necessarily required to win. To conclude this example, advertising module 220 causes the advertisement associated with the highest bid to be presented to the user.
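As an illustration only (the patent does not specify an implementation), the bid-selection step of blocks 904-906 can be sketched as follows; the field names and bid amounts are assumptions:

```python
# Illustrative sketch of blocks 904-906: each advertiser bids for the right
# to present its advertisement during the current presentation time period,
# and one bid's advertisement is selected. Here selection simply takes the
# highest bid, though as noted the highest bid is not strictly required.

def select_winning_bid(bids):
    """bids: list of dicts like {"advertiser": ..., "amount": ..., "ad_id": ...}.
    Returns the winning bid, or None when no bids were received."""
    if not bids:
        return None
    return max(bids, key=lambda bid: bid["amount"])

bids = [
    {"advertiser": "first", "amount": 0.40, "ad_id": "ad-truck"},
    {"advertiser": "second", "amount": 0.55, "ad_id": "ad-pizza"},
    {"advertiser": "third", "amount": 0.25, "ad_id": "ad-shoes"},
]
winner = select_winning_bid(bids)  # the second advertiser's bid wins here
```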
In addition to the approaches listed above, the techniques may provide advertisers with the number of additional users present during presentation of the media program, including in some cases their current media reactions, thereby potentially increasing the bids.
Also, the advertising module 220 can receive a media reaction to the advertisement shown and, based on the reaction, reduce or increase the cost of the advertisement relative to bids placed for the advertisement.
Method 900 may be repeated in whole or in part for subsequent advertisements, including based on current media reactions to previous advertisements, similarly to the example described for method 700.
FIG. 11 depicts a method 1100 for presenting an advertisement based on a current media reaction, including immediately following the scene to which the current media reaction applies.
Block 1102 determines which of a plurality of potential advertisements to present based on a current media reaction to a scene of a media program presented to a user, a genre of the media program, and a reaction history associated with the user. This determination may be performed in the manners set forth above. Note that advertisers may set bids or prepay based on their advertisement being presented after a certain type of reaction, such as paying five cents for each presentation of an advertisement following a laughter reaction. Also, if advertisements are placed not for each user individually but generally or by group (e.g., for people in a certain geographic area), the respective bids or prepayments may be weighted based on the percentage of positive reactions, or the like.
Block 1104 causes the determined advertisement to be presented upon completion of presentation of the scene of the media program.
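The per-reaction prepayment and group-weighting scheme described for block 1102 might be sketched as follows; the five-cent rate, reaction labels, and weighting rule are assumptions for illustration:

```python
# Hypothetical sketch of per-reaction pricing with group weighting: an
# advertiser prepays a fixed rate per impression shown after a trigger
# reaction (e.g., laughter), and for group placements the total charge is
# weighted by the share of the group showing that reaction.

def group_placement_charge(rate_per_impression, reactions, trigger="laugh"):
    """reactions: one reaction label per viewer in the group just before
    the advertisement slot. Returns the weighted charge for the placement."""
    if not reactions:
        return 0.0
    share = sum(1 for r in reactions if r == trigger) / len(reactions)
    return rate_per_impression * share * len(reactions)

# Four of eight viewers laughed just before the slot, so the advertiser is
# charged for four effective impressions at the 0.05 rate.
charge = group_placement_charge(0.05, ["laugh"] * 4 + ["bored"] * 4)
```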
Method for determining a future portion of a currently presented media program
As noted above, the techniques may determine a user's current media reaction, such as engagement, status, and/or interest level. The following methods address how a current media reaction may be used to determine a future portion to be presented during a currently presented media program.
Fig. 12 depicts a method 1200 for determining a future portion of a currently presented media program, including based on a current media reaction of a user, the current media reaction determined based on sensor data passively sensed during presentation of the media program to the user.
Block 1202 receives, during presentation of a media program to a user, a current media reaction of the user to a portion of the media program, the media reaction determined based on sensor data passively sensed during the presentation.
As noted in detail elsewhere herein, the current media reaction may be of various types and in response to various media, such as laughing at a scene in a comedy, cheering at a play in a live sporting event, dancing along with a song or music video, being distracted during a serial episode, intently watching a commercial for a movie, or talking over a news program with another person in the room, to name just a few. The media program is one currently being presented to a user (such as user 116-1 of fig. 1) rather than a previously presented media program, for which a reaction would instead be a historical media reaction. A reaction history based on historical media reactions may, however, be used in conjunction with the current media reaction to determine the future portion. Also, other current media reactions made earlier during the same media program may be used in addition to, or instead of, the most recent media reaction.
The media reaction is current in that it is received during the current presentation of the media program, but it need not be received immediately or instantaneously, nor need it be the most recent media reaction to the media program. Thus, a current media reaction to the fourth portion of the media program may be received during the sixth portion and used to determine the fifteenth portion to be presented.
By way of example, consider fig. 13, which shows remote device 302, with portion module 224 implemented thereon, receiving demographics 1302, a portion of the reaction history 1304, current media reaction 1306, and information about the media program 1308 from computing device 202 of fig. 2. Portion module 224 receives this data over communication network 304 and, in response, causes computing device 202 to present a particular future portion of the media program to the user associated with the data.
Also by way of example, consider fig. 8, which shows current media reactions to a comedy program (The Office, episode 104) on a portion-by-portion basis as the program is presented (shown in time-based state graph 800). Although fig. 8 shows 23 media reactions 802 for 23 portions, for this example consider media reactions 828, 830, and 832, which represent smile states at portions 14, 15, and 16. Assume here that these are the three current media reactions (of which media reaction 832 is the most recent) and that portions 17 through 23 have not yet been presented. Also assume that demographics 1302 indicate that the person watching The Office is a 23-year-old woman, that this portion 1304 of the reaction history indicates that the person generally dislikes comedies but likes science-fiction movies and dramas, that the current media reaction 1306 includes the three smile states noted above, and that the information about the media program 1308 indicates that the program is The Office, episode 104, and that the current media reactions are to portions 14, 15, and 16.
Block 1204 determines a future portion of the media program for presentation to the user based on the media reaction and the portion to which it applies, the future portion occurring later in the media program than that portion. In making this determination, portion module 224 may receive sufficient information directly or may use the received information to obtain additional information. Thus, assume that the information 1308 about the media program indicates the three portions and that portion module 224 determines that those portions are for a scene that dramatically develops the character Pam, and that the scene is not a joke or intended to be comedic. Based on the person's reactions (smiles) and the subject of the portions (Pam's character development), portion module 224 can decide among possible scenes to be displayed, for example, at the end of the program. Portion module 224 may also base this determination on other information, as indicated in fig. 13. Thus, portion module 224 may determine that a 23-year-old woman who dislikes comedy overall but smiled throughout the scene about Pam would enjoy another character-development scene more than a humorous scene in which the character named Dwight falls off a truck. Here portions 214 include two possible future portions to be shown at the end of The Office (here at portion 23): one in which Dwight falls off the truck and one about Pam.
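A minimal sketch of how portion module 224 might score candidate endings against the current reactions and the reaction history follows; the candidate names, topic tags, and scoring weights are assumptions, not taken from the patent:

```python
# Assumed scoring scheme: a candidate portion earns a point for each positive
# current reaction to a portion sharing one of its topic tags, plus the
# viewer's long-term affinity for each of its tags from the reaction history.

POSITIVE = {"smile", "laugh", "cheer"}

def choose_future_portion(candidates, current_reactions, reaction_history):
    """candidates: {name: set of topic tags}; current_reactions: list of
    (reaction, topic) pairs; reaction_history: {tag: affinity score}."""
    def score(tags):
        from_reactions = sum(
            1.0 for reaction, topic in current_reactions
            if topic in tags and reaction in POSITIVE
        )
        from_history = sum(reaction_history.get(tag, 0.0) for tag in tags)
        return from_reactions + from_history
    return max(candidates, key=lambda name: score(candidates[name]))

candidates = {
    "dwight_falls_off_truck": {"comedy", "dwight"},
    "pam_character_scene": {"character_development", "pam"},
}
current = [("smile", "pam"), ("smile", "pam"), ("smile", "pam")]
history = {"comedy": -0.5, "character_development": 0.3}
chosen = choose_future_portion(candidates, current, history)  # the Pam scene
```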
Block 1206 causes the future portion of the media program to be presented to the user during the current presentation of the media program. Portion module 224 may act locally or remotely and may provide the portion itself or an indication of it. Thus, portion module 224 may cause the future portion to be presented by communicating the content portion, or indication 1310 thereof, to computing device 202 over communication network 304. If an indication is received, portion module 224 may select, based on the indication, from various previously prepared portions stored locally at, or accessible by, computing device 202.
To conclude the ongoing example, assume that remote device 302 of fig. 13 is streaming the media program to set-top box 202-4 and thus streams the scene about Pam, rather than the scene about Dwight, as portion 23.
Although the above example of method 1200 involves a single user, media reactions of other users may also be used, including those of others physically local to the user (e.g., watching in the same room as the 23-year-old woman). Media reactions of users not viewing with the user may also be used, such as those of other members of the same demographic group (e.g., women aged 18-34) or of a general audience (e.g., media reactions received from all viewers during a first airing in the Eastern time zone of the United States and Canada).
Note that the media reactions of this user and of other users may be received and used in real time to determine future portions of the currently presented media program. Thus, the program can be customized on-the-fly and in real time for the person, thereby improving its quality. In this example the media program is customized from previously prepared portions, though this is not required. Live programs may also be altered in real time, such as a live late-night comedy show choosing which skits to perform based on positive reactions to skits presented earlier in the program.
Fig. 14 depicts a method 1400 for determining a future portion of a currently presented media program, including when the future portion is responsive to an explicitly requested media reaction.
Block 1402 presents, or causes to be presented, during a media program, an explicit request for a requested media reaction, the explicit request being part of the media program and indicating a response to the requested media reaction, the requested media reaction being a physical change by the user. The requested media reaction may be one or more of those described herein, such as raising a hand, cheering, or smiling.
Moreover, the explicit request may be presented as part of, and within, the media program. Thus, an advertisement may be constructed with text or a narrator requesting that the user raise his or her hands to schedule a test drive of a car; a talent show, whether live or recorded, may include the presenter requesting that viewers cheer or boo a character to decide which character stays on the program; or a suspense movie may have a character in the movie ask users whether the character should run away, hide, or fight the villain.
Alternatively, the explicit request may be presented but not part of or within the media program, such as by a pop-up window overlaid on the media program.
The response itself may be similar to those indicated above for an advertisement, such as a coupon or an offer for a product or service, whether within an advertisement or a non-advertisement media program.
The response may also or instead include presenting a different portion of the program later in the program. A talent show may explicitly request a media reaction in order to present more content about a character or situation, such as: "Wave one hand if you want to see more about Ginger's adventure helping the homeless, wave both hands if you want to see more about Bart's trip to the bike store, or cheer if you want to see more of Susie and Ginger fighting about Bart." In this example the response has three parts (or can be considered three responses), one for each requested media reaction (here, Ginger's adventure, Bart's trip, or Susie's fight).
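The three-part response above can be sketched as a simple mapping from requested reaction to the later portion it selects; the reaction labels and portion identifiers are hypothetical:

```python
# Each branch of the explicit request maps a requested media reaction to the
# program portion that the response presents later in the program.

BRANCHES = {
    "wave_one_hand": "ginger_adventure",
    "wave_both_hands": "bart_bike_trip",
    "cheer": "susie_ginger_fight",
}

def resolve_requested_reaction(reaction, branches=BRANCHES):
    """Return the portion selected by the sensed requested reaction, or
    None when the sensed reaction matches no branch of the request."""
    return branches.get(reaction)

portion = resolve_requested_reaction("cheer")  # selects the Susie fight
```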
Block 1404 receives the requested media reaction, sensed during the presentation and commensurate with presentation of the explicit request. The techniques may receive the requested media reaction from another entity or determine it based on sensor data (passively sensed or otherwise). In one embodiment, block 1404 is performed by status module 106, which determines the requested media reaction based on sensor data passively sensed during, at, or immediately following presentation of the explicit request, measuring the physical change by the user.
In response to receiving the requested media reaction, block 1406 performs the response. Optionally or additionally, prior to performing the response at block 1406, method 1400 may determine at block 1408 that requested media reactions of other users have also been received and base the response on those reactions as well.
In this case, prior to performing the response, portion module 224 may determine that other requested media reactions of other users were received during other presentations of the media program. Portion module 224 may then base the response on those other users' media reactions, such as presenting the portion showing Susie's fight based on the media reactions of the user and the other users. The media reactions of other users considered may be those of all users, of users in the same demographic group, of the user's friends (whether or not viewing in the room with the user), or of the user's family (e.g., those in the room who also responded to the explicit request).
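One plausible aggregation for block 1408 is a simple majority vote over the other users' requested reactions; this is a sketch under that assumption, with illustrative reaction labels:

```python
# Tally the requested media reactions received from other users and let the
# most common reaction choose which branch of the response is performed.
from collections import Counter

def majority_reaction(requested_reactions):
    """Return the most common requested reaction, or None if none received.
    Ties are broken by first occurrence, per Counter.most_common ordering."""
    if not requested_reactions:
        return None
    return Counter(requested_reactions).most_common(1)[0][0]

others = ["cheer", "wave_one_hand", "cheer", "cheer", "wave_both_hands"]
branch = majority_reaction(others)  # "cheer" wins, 3 of 5
```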
Also optionally or additionally, method 1400 may proceed to block 1410, which requests another requested media reaction in order to cause another response to be performed for other users associated with the user. This may be presented as a subsequent explicit request, such as asking the user to raise his or her hand to also send the coupon to a friend of the user.
The request may involve both the user and his or her friends viewing remotely. Thus, the user may choose to view Susie's fight, but after the user makes that media reaction, portion module 224 presents a second request asking whether the user wants to switch to the content that the user's friend Lydia previously or simultaneously requested (i.e., Ginger's adventure), or that most of her friends requested (such as when five of the user's eight friends have chosen to see more about Bart's trip).
In response to receiving the second requested media reaction, block 1412 causes the response to also be presented to the other user. Portion module 224 may do so directly when operating remotely, or may communicate with a remote entity to cause that entity to present the response to the other users. To conclude the example, assume that the user chooses to view the content her best friend Lydia chose (i.e., Ginger's adventure) so that she and Lydia can discuss it at school the next day. Note that the user also knows that most of her other friends chose to watch Bart's trip, and so she will know to ask them whether they liked it. If her friends say Bart's trip was good, the user can re-watch the program and choose to switch to Bart's trip.
FIG. 15 depicts a method 1500 for determining a future portion of a currently presented media program, including media reactions based on multiple users.
Block 1502 receives, at a remote entity and from a plurality of media presentation devices during presentation of a media program to a plurality of users, media reactions of those users, the media reactions based on sensor data passively sensed at the media presentation devices during a portion of the media program. The media program may be presented to the users live, simultaneously, or at different times. As shown in fig. 13, the current media reaction 1306 may be received alone or with other information as indicated above, such as demographics 1302, portions of reaction histories 1304, and information about the media program 1308, though in this case from multiple computing devices 202.
Block 1504 determines a future portion of the media program for presentation to the users based on the media reactions and the portion, the future portion occurring later in the media program than that portion. As shown in fig. 13, the media program may be stored remotely, such as media programs 210 of figs. 3 and 13, or locally, as shown in fig. 2. Also as noted above, other information may be used in this determination.
The media program may be any of those noted above, such as an advertisement. In that case, portion module 224 and/or advertisement module 220 may determine the future portion based on its being more likely to succeed than one or more other previously prepared portions in a selectable set (e.g., portions 214 of fig. 13). Thus, a user population showing adverse reactions to a first portion that lists details about a real-estate company can be used to determine that a third portion should appear simpler or more stylish rather than continuing to describe the company in detail. Numerous other examples are set forth above.
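The real-estate example might be sketched as a threshold rule; the adverse-reaction labels, threshold value, and portion names are assumptions for illustration:

```python
# If the share of adverse reactions to the currently running portion crosses
# a threshold, fall back to the next previously prepared alternative portion.

ADVERSE = {"sad", "bored", "departed"}

def pick_next_portion(reactions, portions, current="detailed", threshold=0.5):
    """reactions: reaction labels from the user population; portions: ordered
    previously prepared alternatives, e.g. ["detailed", "simple", "trendy"]."""
    if not reactions:
        return current
    share = sum(1 for r in reactions if r in ADVERSE) / len(reactions)
    if share > threshold:
        idx = portions.index(current)
        return portions[min(idx + 1, len(portions) - 1)]
    return current

# Three of four viewers reacted adversely, so the simpler portion is chosen.
nxt = pick_next_portion(["bored", "bored", "smile", "departed"],
                        ["detailed", "simple", "trendy"])
```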
Block 1506 causes the future portion of the media program to be presented to the users at the plurality of media presentation devices during presentation of the media program. Block 1506 may do so in the various ways described in detail above, such as by streaming the future portion from remote device 302 in real time to the multiple users through their multiple computing devices 202.
In some embodiments, the media reactions of multiple users may be used to determine how future programs are created or which previously prepared future programs are presented. Consider a case in which a media provider has ten time slots for an adventure television series. Assume that the first three episodes have some internal portions that can be altered using the techniques, but that episodes for the next seven time slots (e.g., the remaining weeks in the season) have not yet been prepared. Television seasons are typically structured so that the entire season is prepared in advance, making large changes mid-season difficult to perform. With the techniques, however, the media provider may prepare complete episodes as the season progresses. Thus, the media provider can determine, based on media reactions from multiple users during the first three episodes and the portions of those episodes to which the reactions apply, that a particular character is very interesting to viewers. In response, an episode focusing on that character rather than others may be displayed.
Additionally or alternatively, some previously prepared episodes may have multiple selectable sets of scenes, and so episodes may be customized to audiences based on these media reactions (either generally or for various groups). In this way, media reactions may be used to determine a future portion of a media program even when the change is not made in real time.
The foregoing discussion describes, among other methods and techniques, methods related to determining a future portion of a currently presented media program. Aspects of these methods may be implemented in hardware (e.g., fixed logic circuitry), firmware, software, manual processing, or any combination thereof. A software implementation represents program code that performs specified tasks when executed by a computer processor. The example methods may be described in the general context of computer-executable instructions, which may include software, applications, routines, programs, objects, components, data structures, procedures, modules, functions, and the like. The program code can be stored in one or more computer-readable memory devices, local and/or remote to a computer processor. The methods may also be practiced by multiple computing devices in a distributed computing model. Furthermore, the features described herein are platform-independent and may be implemented on a variety of computing platforms having a variety of processors.
These techniques may be embodied on one or more of the entities shown in fig. 1-3, 10, 13, and 16 (device 1600 is described below), which may be further divided, combined, and so on. Accordingly, these figures illustrate some of the many possible systems or devices capable of employing the described techniques. The entities in these figures generally represent software, firmware, hardware, entire devices or networks, or a combination thereof. For example, in the case of a software implementation, the entities (e.g., status module 106, interest module 108, interface module 110, history module 216, advertisement module 220, and portion module 224) represent program code that performs specified tasks when executed on a processor (e.g., processors 204 and/or 306). The program code can be stored in one or more computer-readable memory devices, such as CRM 206 and/or remote CRM 308 or computer-readable storage medium 1614 in fig. 16.
Example apparatus
Fig. 16 illustrates various components of an example device 1600 that can be implemented as any type of client, server, and/or computing device described with reference to the previous figs. 1-15 to implement techniques for determining a future portion of a currently presented media program. In embodiments, device 1600 can be implemented as one or a combination of wired and/or wireless devices, such as any form of television client device (e.g., a television set-top box, a Digital Video Recorder (DVR), etc.), consumer device, computer device, server device, portable computer device, user device, communication device, video processing and/or rendering device, appliance device, gaming device, electronic device, System-on-Chip (SoC), and/or another type of device or portion thereof. Device 1600 may also be associated with a user (e.g., a person) and/or an entity that operates the device such that a device describes logical devices that include users, software, firmware, and/or a combination of devices.
Device 1600 includes communication devices 1602 that enable wired and/or wireless communication of device data 1604 (e.g., received data, data that is being received, data scheduled for broadcast, data packets of the data, etc.). The device data 1604 or other device content can include configuration settings of the device, media content (e.g., media programs 210) stored on the device, and/or information associated with a user of the device. Media content stored on device 1600 can include any type of audio, video, and/or image data. Device 1600 includes one or more data inputs 1606 via which any type of data, media content, and/or inputs can be received, such as human utterances, user-selectable inputs, messages, music, television media content, media reactions, recorded video content, and any other type of audio, video, and/or image data received from any content and/or data source.
Device 1600 also includes communication interfaces 1608, which can be implemented as any one or more of a serial and/or parallel interface, a wireless interface, any type of network interface, a modem, and as any other type of communication interface. Communication interface(s) 1608 provide a connection and/or communication link(s) between device 1600 and a communication network by which other electronic, computing, and communication devices communicate data with device 1600.
Device 1600 includes one or more processors 1610 (e.g., any of microprocessors, controllers, and the like) that process various computer-executable instructions to control the operation of device 1600 and to implement techniques for determining future portions of a currently presented media program and other methods described herein. Additionally or alternatively, device 1600 may be implemented with any one or combination of hardware, firmware, or fixed logic circuitry that is implemented in connection with processing and control circuits which are generally identified at 1612. Although not shown, device 1600 can include a system bus or data transfer system that couples the various components within the device. A system bus can include any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures.
Device 1600 also includes computer-readable storage media 1614, such as one or more memory devices that enable persistent and/or non-transitory data storage (i.e., in contrast to mere signal transmission), examples of which include Random Access Memory (RAM), non-volatile memory (e.g., any one or more of a read-only memory (ROM), flash memory, EPROM, EEPROM, etc.), and a disk storage device. A disk storage device may be implemented as any type of magnetic or optical storage device, such as a hard disk drive, a recordable and/or rewriteable Compact Disc (CD), any type of a Digital Versatile Disc (DVD), and the like. Device 1600 may also include a mass storage device 1616.
Computer-readable storage media 1614 provides data storage mechanisms to store the device data 1604, as well as various device applications 1618 and any other types of information and/or data related to operational aspects of device 1600. For example, an operating system 1620 can be maintained as a computer application with the computer-readable storage media 1614 and executed on processors 1610. The device applications 1618 may include a device manager, such as any form of a control application, software application, signal processing and control module, code that is native to a particular device, a hardware abstraction layer for a particular device, and so forth.
The device applications 1618 also include any system components, engines, or modules to implement techniques for determining future portions of a currently presented media program. In this example, the device applications 1618 may include the status module 106, the interest module 108, the interface module 110, the history module 216, the advertisement module 220, and/or the portion module 224.
Claims (9)
1. A computer-implemented method for presenting media programs, comprising:
receiving, during presentation of a media program, a media reaction to a portion of the media program, the media reaction determined based on sensor data passively sensed during the presentation;
determining a future portion of the media program based on the media reaction and the portion of the media program and a reaction history or a portion of a reaction history, the future portion of the media program occurring later in the media program than the portion of the media program, the reaction history based on one or more contexts in which media reactions to media programs are received, wherein determining the future portion of the media program includes selecting which of a plurality of advertisements to display and when to show the selected advertisement; and
causing the future portion of the media program to be presented during the presentation of the media program.
2. The computer-implemented method of claim 1, wherein determining the future portion is further based on: demographic information; or information about the media program.
3. The computer-implemented method of claim 1, further comprising: receiving another media reaction determined based on other sensor data sensed during the portion of the media program and associated with another user different from a user associated with the media reaction, the other user being physically local to the user, and wherein the future portion is further based on the other media reaction.
4. The computer-implemented method of claim 1, wherein the media reaction is a media reaction for a scene of the media program and the future portion is another scene of the media program.
5. A computer-implemented method for presenting media programs, comprising:
presenting or causing presentation of an explicit request for a requested media reaction during a media program, the explicit request being part of the media program and indicating a response to the requested media reaction, the explicit request being made by a presenter, moderator, or character as part of or within the media program;
receiving the requested media reaction, the requested media reaction determined based on sensor data passively sensed during the presentation and commensurate with the explicitly requested presentation; and
in response to receiving the requested media reaction, performing the response.
6. The computer-implemented method of claim 5, wherein the requested media reaction represents a physical change by the user and includes a hand wave, a cheer, a smile, a frown, a laugh, a scream, or a clap.
7. The computer-implemented method of claim 5, wherein the response is an offer for a product or service and executing the response provides the offer.
8. The computer-implemented method of claim 5, further comprising:
presenting a second explicit request for a second requested media reaction, the second explicit request indicating that the response is to be provided to another user associated with the user making the media reaction; and
causing the response to be provided to the other user in response to receiving the second requested media reaction.
9. The computer-implemented method of claim 5, further comprising, prior to executing the response, determining that other requested media reactions of other presentations of the media program are also received and wherein executing the response is based at least in part on the other requested media reactions.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CA2,775,700 | 2012-05-04 | ||
| US13/482,867 | 2012-05-29 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1189079A true HK1189079A (en) | 2014-05-23 |
| HK1189079B HK1189079B (en) | 2018-06-01 |