
HK1180864A - Methods and apparatus to monitor a multimedia presentation including multiple content windows - Google Patents


Info

Publication number: HK1180864A
Application number: HK13108125.6A
Authority: HK (Hong Kong)
Prior art keywords: media, window, region, highlighted, content
Other languages: Chinese (zh)
Inventors: 张敏, 斯科特.库珀, 道格.特恩博
Original assignee: 尼尔森(美国)有限公司 (The Nielsen Company (US))
Description

Method and apparatus for monitoring a multimedia presentation comprising a plurality of content windows
The present application is a divisional of the original application having a filing date of 31/2010, application number 201010242090.7, and the title "Method and apparatus for monitoring a multimedia presentation including a plurality of content windows".
Technical Field
The present disclosure relates generally to the monitoring of multimedia presentations and, more particularly, to methods and apparatus for monitoring multimedia presentations that include a plurality of content windows.
Background
Many broadband cable and satellite service providers now provide interactive solutions that allow viewers to interact with multiple content windows included in a single multimedia content presentation. In an exemplary interactive television (TV) solution, each content window may be individually configured to present a viewer navigation menu of content or other interactive services provided by a particular broadcast network. For example, a user of the Direct Broadcast Satellite (DBS) service provided by DISH Network may select the DISH home portal channel, which enables navigation between each of six video content windows contained in a multi-window video display mosaic. Window navigation is done using the directional arrows of a remote control. As the viewer navigates between different windows of the multi-window video display presenting network media content, the audio presentation changes to correspond to the particular window that has been highlighted by the viewer. When a subsequent selection is made to view a highlighted window presenting network media content, full-screen video and audio corresponding to the selected network media content are presented. However, when the viewer navigates to and highlights a window presenting some other audio-free interactive application, the audio stays with the last highlighted window that presented network media content. The DISH home portal channel may be accessed in a variety of ways, including but not limited to: tuning to a predetermined channel (e.g., channel 100); pressing an interactive TV button on a remote control; entering the home portal channel from an Electronic Program Guide (EPG); and selecting a trigger advertisement on another channel.
Drawings
FIG. 1 is a block diagram of an exemplary media content monitoring system capable of implementing the media content monitoring techniques described herein;
FIG. 2 is a block diagram of an exemplary monitoring unit capable of monitoring a multimedia presentation comprising a plurality of content windows and which may be used to implement the exemplary media content monitoring system of FIG. 1;
FIG. 3 depicts a first exemplary set of predetermined regions of interest that may be used by the exemplary monitoring unit of FIG. 2 to determine whether a content window contained in a monitored multimedia presentation is highlighted;
FIG. 4 depicts a second exemplary set of predetermined regions of interest that may be used by the exemplary monitoring unit of FIG. 2 to determine whether a content window contained in a monitored multimedia presentation is highlighted;
FIG. 5 depicts an example set of templates that may be used by the example monitoring unit of FIG. 2 to determine whether a content window contained in a monitored multimedia presentation is highlighted;
FIG. 6 is a block diagram of an exemplary highlight window detector that may be used to implement the exemplary monitoring unit of FIG. 2;
FIG. 7 is a flowchart representative of example machine readable instructions that may be executed to implement an example configuration process to implement the example monitoring unit of FIG. 2;
FIG. 8 is a flowchart representative of exemplary machine readable instructions that may be executed to implement an exemplary monitoring process to implement the exemplary monitoring unit of FIG. 2;
FIGS. 9A-9B collectively form a flowchart of example machine readable instructions that may be executed to implement an example highlight window detection process to implement the example machine readable instructions of FIG. 8 and/or the example monitoring unit of FIG. 2; and
FIG. 10 is a block diagram of an example computer system that may store and/or execute the example machine readable instructions of FIGS. 7, 8, and/or 9A-9B to implement the example monitoring unit of FIG. 2.
Detailed Description
Methods and apparatus to monitor a multimedia presentation comprising a plurality of content windows are disclosed. In an embodiment disclosed herein, a method of monitoring an exemplary media device providing a media presentation that can include a plurality of content windows includes obtaining a monitoring image corresponding to the media presentation provided by the media device. The exemplary method also includes determining a first parameter value representing at least one of a luminance or a chrominance of a first region in the monitored image, wherein a shape of the first region represents at least one content window of the plurality of content windows. Further, the exemplary method includes comparing the first parameter value to a highlight threshold to determine whether a first content window associated with a location of a first region in the monitored image is highlighted.
In another embodiment disclosed herein, a machine-readable article stores machine-readable instructions that, when executed, cause a machine to obtain a monitoring image corresponding to a media presentation provided by a media device, wherein the media presentation can include a plurality of content windows. Execution of the stored example machine readable instructions further causes the example machine to determine a first parameter value representing at least one of luminance or chrominance of a first region in the monitored image, wherein a shape of the first region represents at least one of the plurality of content windows. Further, execution of the stored example machine readable instructions causes the example machine to compare the first parameter value to a highlight threshold to determine whether a first content window associated with a location of a first region in the monitored image is highlighted.
In yet another embodiment disclosed herein, a media device monitoring unit includes an exemplary video interface communicatively coupled with at least one of a camera or a video output of an exemplary media device to obtain a monitoring image corresponding to a media presentation provided by the media device, wherein the media device is capable of including a plurality of content windows in the media presentation. The example media device monitoring unit also includes an example highlight window detector communicatively coupled with the example video interface and operable to determine a first parameter value representing at least one of luminance or chrominance of a first region in the monitored image, wherein a shape of the first region represents at least one of the plurality of content windows. Additionally, the example highlight window detector is operable to compare the first parameter value to a highlight threshold to determine whether a first content window associated with a location of the first region in the monitored image is highlighted. Further, the example media device monitoring unit includes an example configuration interface to specify at least one template corresponding to a shape of at least one of the plurality of content windows, or to specify a plurality of regions of interest corresponding to the plurality of content windows, respectively. In an exemplary implementation, the highlight window detector is operable to determine a first region in the monitored image using at least one template or a plurality of regions of interest specified by the exemplary configuration interface.
Many existing media content monitoring techniques implement content monitoring by processing video and/or audio signals that are assumed to represent only a single media content presentation. In contrast, the example media content monitoring techniques described herein determine viewer interaction with a multimedia presentation that includes multiple content windows by detecting which content window (e.g., which content presentation or display area) of the multi-window multimedia presentation is currently highlighted by the viewer. Typically, the viewer can highlight a particular content window by navigating to the window or making an initial selection of the window (or, for example, a tag or other identifier associated with the window), such that the selected content window (or tag, etc.) is highlighted relative to the other content windows (or tags, etc.) in the multi-window multimedia presentation. As such, the example media content monitoring techniques described herein can passively or non-invasively process a monitored image (e.g., a captured video frame) corresponding to the multi-window multimedia presentation and use luminance and/or chrominance information included in the monitored image to determine which content window (e.g., which content presentation or display area) is highlighted by the viewer.
In another exemplary implementation of the media content monitoring techniques described herein, an exemplary monitoring unit is configured to monitor a media device capable of providing a multimedia presentation, wherein the multimedia presentation includes a plurality of selectable content windows at known locations of the multimedia presentation. At least some of the content windows can be highlighted by the viewer. In such embodiments, the monitoring unit is configured or trained using a predetermined region of interest in the multimedia presentation that corresponds to a known location of a plurality of content windows that can be highlighted by the viewer. Additionally or alternatively, the predetermined area of interest corresponds to an identification area in the multimedia presentation that marks each content window and can be highlighted by the viewer to select the corresponding content window.
After configuration using the predetermined regions of interest, the exemplary monitoring unit captures and digitizes (or otherwise acquires) video frames (or, in other words, images) representing the monitored multimedia presentation. The exemplary monitoring unit then analyzes the luminance and/or chrominance information associated with each predetermined region of interest of the monitored video frame (or monitored image) to determine whether one or more associated content windows are highlighted by the viewer. For example, when a content window is highlighted, the area of the multimedia presentation corresponding to the highlighted window exhibits high and consistent levels of luminance and chrominance over most, if not all, of the area. Thus, in this embodiment, if the luminance and/or chrominance in a particular region of interest of the monitored video frame (or monitored image) meets certain criteria (e.g., if the luminance of a particular predetermined region of interest is above a threshold and greater than the luminance associated with the other predetermined regions of interest), then the content window associated with that particular region of interest is determined to be highlighted by the viewer.
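The region-of-interest criteria described above can be sketched as follows. This is an illustrative sketch only, not taken from the source: the image representation (lists of 8-bit luminance rows), the ROI tuple format, the function names, and the threshold value of 180 are all assumptions made for the example.

```python
def mean_luminance(image, roi):
    """Average luminance over one predetermined region of interest.

    `image` is a list of rows of 8-bit luminance values; `roi` is an
    illustrative (left, top, width, height) tuple.
    """
    left, top, width, height = roi
    total = 0
    for row in image[top:top + height]:
        total += sum(row[left:left + width])
    return total / (width * height)


def detect_highlighted_window(image, rois, highlight_threshold=180):
    """Return the index of the highlighted window's ROI, or None.

    Mirrors the criteria in the text: a window counts as highlighted
    when its region's mean luminance exceeds an absolute threshold AND
    is the greatest among all predetermined regions of interest.
    """
    values = [mean_luminance(image, roi) for roi in rois]
    best = max(range(len(rois)), key=lambda i: values[i])
    if values[best] > highlight_threshold:
        return best
    return None
```

In practice the same structure could be applied per chrominance channel as well, with the threshold chosen during the configuration/training step rather than hard-coded.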
In another exemplary implementation, the exemplary monitoring unit is additionally or alternatively configured to monitor a media device capable of providing a multimedia presentation that includes a plurality of selectable content windows at potentially unknown (or known) locations in the multimedia presentation. In such an embodiment, the monitoring unit is configured with a set of templates (e.g., in the form of binary images, data-specified shapes and/or sizes of image regions, data-specified pixel boundaries of image regions, etc.) that represent the shapes of all possible highlighted content windows and/or identification regions in the monitored multimedia presentation. During operation, the exemplary monitoring unit captures and processes (or acquires) monitored video frames (or, in other words, monitored images) representing the monitored multimedia presentation. The exemplary monitoring unit then compares the monitored video frames (or monitored images) to the set of templates to identify a match between the highlighted content window (if present) and one or more templates.
Various digital image processing and matching techniques (e.g., binarization, correlation analysis, etc.) may be used to compare the monitored video frame (or monitored image) to the set of templates. For example, digital image binarization involves converting a color or grayscale image to a binary (e.g., black and white) image, while digital image correlation analysis provides an index of the correlation (similarity) between two digital images to determine whether the two images match. In certain exemplary implementations, the monitoring unit captures, digitizes, and binarizes video frames corresponding to the monitored multimedia presentation to obtain a binary monitored image representative of the monitored multimedia presentation. The exemplary monitoring unit then moves some or all of the configured templates in both the horizontal and vertical directions over the binary monitored image (or selects locations (e.g., pixel locations) and areas (e.g., pixel boundaries) of the binary monitored image to compare with the configured templates) to determine whether any template matches at a location in the binary monitored image. If a match is found, the location of the match and the particular template that resulted in the match are used to indicate that a content window in the monitored multimedia presentation has been highlighted by the viewer.
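The binarize-and-match step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the image and template representations, the fixed binarization threshold of 128, the agreement-fraction similarity measure, and the 0.95 match threshold are all assumptions chosen for the example.

```python
def binarize(image, threshold=128):
    """Convert a grayscale image (rows of luminance values) to a binary
    (0/1) image using a fixed, illustrative threshold."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]


def match_at(binary, template, x, y):
    """Fraction of template pixels that agree with the binary image when
    the template's top-left corner is placed at offset (x, y)."""
    th, tw = len(template), len(template[0])
    agree = sum(
        1
        for j in range(th)
        for i in range(tw)
        if binary[y + j][x + i] == template[j][i]
    )
    return agree / (th * tw)


def find_template(binary, template, similarity=0.95):
    """Slide the template horizontally and vertically over the binary
    monitored image; return the first (x, y) offset whose similarity
    meets the threshold, else None."""
    th, tw = len(template), len(template[0])
    for y in range(len(binary) - th + 1):
        for x in range(len(binary[0]) - tw + 1):
            if match_at(binary, template, x, y) >= similarity:
                return (x, y)
    return None
```

A production system would typically use a normalized correlation measure and restrict the search to candidate locations, but the slide-compare-threshold structure is the same.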
Upon determining whether, and which, content window has been highlighted by the viewer, at least some of the example monitoring techniques described herein operate to identify the media content presented in the highlighted content window. For example, each content window may be pre-assigned to a particular broadcast channel (or, more generally, a particular content source). In such embodiments, identifying the highlighted content window also uniquely identifies the content source. Additionally or alternatively, Optical Character Recognition (OCR) and/or logo detection can be used to process content appearing in the highlighted content window to obtain identifying information (e.g., broadcast channel number, name, logo, etc.) that can be used to identify the source of the content appearing in the highlighted content window. Audio codes/signatures determined from audio signals emitted by the monitored media device, and/or video codes/signatures determined from video signals emitted by the media device, can also be used to identify the content appearing in a highlighted content window of the monitored multimedia presentation.
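The pre-assignment scheme described above amounts to a simple lookup from window index to content source, with a fallback (e.g., to OCR/logo detection) when no source was pre-assigned. The mapping below is purely illustrative; the window indices and channel names are invented for the example and do not come from the source.

```python
# Hypothetical pre-assigned window-to-source mapping established during
# the monitoring unit's configuration step (values are illustrative).
WINDOW_TO_SOURCE = {
    0: "Broadcast channel 100",
    1: "Broadcast channel 101",
    2: "Interactive program guide",
}


def identify_source(highlighted_window):
    """Map a detected highlighted-window index to its pre-assigned
    content source. Returns None when no source was pre-assigned,
    signalling that a secondary method (OCR, logo detection, or
    audio/video codes and signatures) should be used instead."""
    return WINDOW_TO_SOURCE.get(highlighted_window)
```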
Turning to the drawings, FIG. 1 illustrates a block diagram of an exemplary media content monitoring system 100 capable of implementing the media content monitoring techniques described herein. The exemplary media content monitoring system 100 includes an exemplary media device 105, the exemplary media device 105 configured to present an exemplary multimedia presentation 110, the exemplary multimedia presentation 110 using media content received from an exemplary service provider 115. The exemplary media content monitoring system 100 also includes an exemplary monitoring unit 120, the exemplary monitoring unit 120 being configured to monitor the exemplary multimedia presentation 110 provided by the exemplary media device 105. The exemplary media content monitoring system 100 also includes an exemplary central processing device 125, the exemplary central processing device 125 being in communication with the exemplary monitoring unit 120 via an exemplary network 130 and configured to receive monitoring data determined by the monitoring unit 120. Additionally, at least some example implementations of the media content monitoring system 100 include an example configuration terminal 135, the example configuration terminal 135 allowing local configuration of the example monitoring unit 120. Additionally or alternatively, at least some example implementations allow for remote configuration of the example monitoring unit 120 via the example central processing device 125 and the example network 130. Additionally or alternatively, in at least some example implementations, the example monitoring unit 120 can cause the example media device 105 to present one or more configuration windows, with the example monitoring unit 120 receiving configuration input from, for example, a remote control or other input device.
Referring in more detail to FIG. 1, the exemplary service provider 115 operates as an input to the exemplary media content monitoring system 100 and can be implemented as any type of service provider, such as a broadcast cable service provider, a satellite television service provider (e.g., the DISH Network), a live satellite feed, a Radio Frequency (RF) television service provider, an internet streaming video/audio provider (e.g., Netflix, Inc.), a Video On Demand (VOD) provider, and so forth. Additionally or alternatively, the example service provider 115 can be supplemented by or replaced with one or more local media content sources, such as Digital Versatile Disk (DVD) players, Video Cassette Recorders (VCRs), video game consoles, Digital Video Recorders (DVRs), and the like. Further, the example service provider 115 can be implemented in the form of any number of service providers and/or any combination of any number of local media content sources (such as those described above).
The exemplary media device 105 may be implemented by any type of media device, such as a television, a set-top box (STB), a multimedia computer system, a multimedia-capable handset, a Personal Digital Assistant (PDA), and so forth. Further, in some example implementations, the example media device 105 may be coupled to or integrated with one or more local media content sources (such as those described above in describing the example service provider 115). More generally, the exemplary media device 105 may be implemented by any type of media device capable of processing media content to provide a multimedia presentation (e.g., the exemplary multimedia presentation 110 in the illustrated embodiment) that includes multiple content regions or windows. As such, the exemplary media device 105 corresponds to a device capable of rendering (e.g., displaying) a multi-window multimedia presentation (e.g., a television) or the exemplary media device 105 may correspond to a device, such as a STB, capable of providing the multi-window multimedia presentation to another device, such as a television, for presentation.
In the illustrated embodiment, the exemplary media device 105 is capable of providing the exemplary multimedia presentation 110, which includes six (6) media content windows 140A-F and six (6) associated identification tags 145A-F. In some implementations, the exemplary media device 105 continuously provides the exemplary multi-window multimedia presentation 110. Additionally or alternatively, the example media device 105 provides the example multi-window multimedia presentation 110 only in a particular operating environment, such as when the media device 105 is tuned to a particular channel, configured to operate in a particular presentation/display mode, configured to execute a particular multimedia presentation application, and so forth. While the exemplary multimedia presentation 110 includes six (6) content windows, the exemplary monitoring techniques described herein support multi-window multimedia presentations that include any number of content windows. Further, each content window included in the multi-window multimedia presentation may be any type of display window, region, bar, menu, etc., with or without borders. Also, each content window in the multi-window multimedia presentation is typically associated with a different media source. For example, a first content window may have as its source a first broadcast channel provided by the service provider 115, while a second content window may have as its source menu display data provided by the service provider 115 or generated by the media device 105. In another example, the second content window may have as its source a second broadcast channel provided by the service provider 115. In yet another example, the second content window may obtain its input from a local media content source. The foregoing embodiments are merely illustrative and are not limiting.
As a first illustrative embodiment, the exemplary service provider 115 corresponds to the DISH direct broadcast satellite service, and the media device 105 corresponds to a media device capable of receiving and presenting broadcast media content received over the DISH satellite service. In this embodiment, the multimedia presentation 110 presented by the exemplary media device 105 corresponds to the presentation of the DISH home portal channel. The exemplary media device 105 may access the DISH home portal channel in a variety of ways, including but not limited to: tuning to a predetermined channel (e.g., channel 100); pressing an interactive TV button on a remote control (not shown); entering the home portal channel from an Electronic Program Guide (EPG); selecting a trigger on another channel, etc.
As shown in the illustrated embodiment, the DISH home portal channel provides six (6) video content windows 140A-140F contained in the multimedia presentation 110, thereby forming a multi-window video display mosaic for display by the exemplary media device 105. Each content window 140A-140F may be configured to present a viewer navigation menu of content or other interactive services provided by a particular broadcast network carried by the DISH direct broadcast satellite service. Navigation among the six (6) video content windows 140A-140F is accomplished using directional arrows of a remote control (not shown). As the viewer navigates to a particular video content window 140A-140F of the multimedia presentation 110, the selected video content window is highlighted. For example, the exemplary multimedia presentation 110 depicts the first content window 140A as highlighted (as is the first identification tag 145A).
In the embodiment of the DISH home portal channel, when a viewer highlights a particular video content window 140A-140F and the highlighted content window presents media content, the audio signal output by the exemplary media device 105 corresponds to the media content presented in the highlighted content window. For example, if the highlighted first content window 140A of the illustrated embodiment is presenting media content provided by a particular broadcast network carried by DISH Network, the audio signal output by the exemplary media device 105 changes to correspond to the network media content presented in the highlighted first content window 140A. When the highlighted content window presenting the network media content (e.g., the first content window 140A) is subsequently selected for viewing, full-screen video and audio corresponding to the selected network media content are presented. However, when the viewer navigates to and highlights a content window 140A-140F that is presenting some other audio-free interactive application (e.g., an interactive program guide), the audio output by the exemplary media device 105 will correspond to the most recently highlighted window that presented network media content. For example, if a viewer highlights a second content window presenting an interactive program guide after a first content window presenting a broadcast program with audio content was previously highlighted, the media device 105 will continue to output audio corresponding to the broadcast program presented in the first content window even though the second content window presenting the interactive program guide is highlighted.
Thus, because the presented audio does not necessarily correspond to the highlighted content window in all cases, merely monitoring the audio output by the exemplary media device 105 may not be sufficient to accurately identify which of the content windows 140A-F included in the exemplary multimedia presentation 110 has been highlighted by the viewer.
In a second exemplary embodiment, the exemplary service provider 115 corresponds to an internet service provider and/or one or more internet streaming video/audio providers, and the media device 105 may correspond to a multimedia computer system, a multimedia-capable cell phone, a PDA, or the like. In this embodiment, the multimedia presentation 110 presented by the exemplary media device 105 may correspond to a plurality of media content windows 140A-F opened for presentation of a plurality of respective media content presentations. For example, each content window 140A-F can be opened and configured for presenting different streamed or downloaded audio/video content. In this embodiment, the viewer may use an input device, such as a mouse, pointer, etc. (not shown), to highlight the activated one of the plurality of content windows 140A-F.
As shown in the above embodiments, each of the media content windows 140A-F included in the exemplary multimedia presentation 110 provided by the exemplary media device 105 of FIG. 1 is capable of presenting any type of media content, including, but not limited to, broadband audio/video content, streaming audio/video content, audio/video content presented from downloaded files, images, menus, graphics, and the like. The identification tags 145A-F can include any type of identifying information, such as broadcast channel numbers, channel names, program names, file names, sites, logos, etc., corresponding to the media content presented in the respective media content windows 140A-F. As discussed in the embodiments above, each of the media content windows 140A-F included in the exemplary multimedia presentation 110 can be highlighted by a viewer to allow the viewer to interact with the exemplary multimedia presentation 110 provided by the exemplary media device 105. In an exemplary implementation, when a viewer selects a particular media content window, the selected content window (or at least a substantial portion thereof) is highlighted. For example, when one content window is highlighted, the multimedia presentation area corresponding to the highlighted window exhibits a substantially different (e.g., higher, lower, etc.) luminance and/or chrominance level (or other image characteristic, such as texture, shading, etc.) than the multimedia presentation areas corresponding to the other content windows. For example, the first media content window 140A in FIG. 1 is depicted as highlighted by the viewer. Additionally or alternatively, when the viewer selects a particular media content window, the identification tag (or at least a substantial portion thereof) associated with the selected content window is highlighted. For example, the first identification tag 145A associated with the first content window 140A in FIG. 1 is also depicted as highlighted by the viewer.
To monitor the exemplary multimedia presentation 110 provided by the exemplary media device 105, the exemplary media content monitoring system 100 includes an exemplary monitoring unit 120. Unlike many, if not all, conventional media content monitors, the example monitoring unit 120 is capable of monitoring the example multimedia presentation 110 and determining which one or more of the plurality of content windows 140A-F has been highlighted by the viewer. Further, in at least some embodiments, the monitoring unit 120 can identify particular media content presented in particular content windows 140A-F that are determined to have been highlighted. Further, in at least some example implementations, the monitoring unit 120 can determine whether the monitored media device 105 has been tuned to a particular channel or has been purposely configured for presenting the example multimedia presentation 110 including the plurality of content windows 140A-F.
In the illustrated embodiment, the monitoring unit 120 includes a video signal input 150 for receiving a video signal output by the exemplary media device 105. Additionally or alternatively, the monitoring unit 120 of the illustrated embodiment includes a camera 155 (e.g., a video camera or digital camera), the camera 155 being positioned to capture images corresponding to the display of the media device 105. Although described as including the example video signal input 150 and the example camera 155, the example monitoring unit 120 can include any type of sensor capable of receiving, detecting, and/or processing the example multimedia presentation 110 provided by the example media device 105. The inclusion of the example video signal input 150 and/or the example camera 155 (and/or any other suitable sensor) in the example monitoring unit 120 allows the monitoring unit 120 to non-invasively monitor the example multimedia presentation 110 provided by the example media device 105 (i.e., without modification to the example media device 105).
For example, as described in greater detail below, the monitoring unit 120 uses video signals received via the example video signal input 150 and/or image(s) taken via the example camera 155 to determine (e.g., capture) a monitoring image (or, equivalently, a monitoring video frame) corresponding to the example multimedia presentation 110 provided by the example media device 105. The example monitoring unit 120 then processes the luminance and/or chrominance information included in the monitored image to determine whether the corresponding monitored multimedia presentation 110 includes a plurality of content windows, such as the plurality of content windows 140A-F of the illustrated embodiment. If it is determined that the monitored multimedia presentation 110 includes a plurality of content windows (e.g., such as the plurality of content windows 140A-F), the example monitoring unit 120 also determines which one or more of the content windows has been highlighted by the viewer.
For example, in a scenario where a plurality of content windows 140A-F included in the monitored multimedia presentation 110 have known locations, the exemplary monitoring unit 120 can be configured or trained using predetermined regions of interest in the multimedia presentation 110 that correspond to the known locations of the plurality of content windows 140A-F. Additionally or alternatively, to support scenarios in which identification tags associated with a particular content window can be highlighted, the exemplary monitoring unit 120 can be configured or trained with predetermined regions of interest in the multimedia presentation 110 that correspond to known locations of the plurality of identification tags 145A-F. After being configured/trained by the predetermined region of interest, the exemplary monitoring unit 120 processes the luminance and/or chrominance information included at each predetermined region of interest in the monitored image to determine whether one or more relevant content windows have been highlighted. The processing of this region of interest is described in more detail below in connection with the description of the exemplary implementation of the monitoring unit 120 shown in fig. 2.
For scenes in which the plurality of content windows 140A-F included in the monitored multimedia presentation 110 may have unknown (or known) locations, the exemplary monitoring unit 120 is additionally or alternatively configured or trained with a set of templates (e.g., in the form of binary images, data specifying the shapes and/or sizes of image regions, data specifying the pixel boundaries of image regions, etc.) that represent the shapes of all of the content windows 140A-F and/or the identification tags 145A-F that may be highlighted in the monitored multimedia presentation 110. After being configured/trained with the template set, the exemplary monitoring unit 120 compares the monitored image to the template set using various digital image processing and matching techniques, such as binarization, correlation analysis, and the like. For example, the monitoring unit 120 moves some or all of the configured templates in both the horizontal and vertical directions over the monitored image (or selects locations (e.g., pixel locations) and areas (e.g., pixel boundaries) of the binary monitored image to compare with the configured templates) to determine whether any template matches a certain location in the monitored image. If a match is found, the location of the match and the particular template that produced the match are used to indicate that the relevant content window 140A-F (and/or identification tag 145A-F) in the monitored multimedia presentation 110 has been highlighted. Such template processing is described in more detail below in connection with the description of the implementation of the monitoring unit 120 in FIG. 2.
In some exemplary implementations, the monitoring unit 120 may make an initial decision as to whether the monitored multimedia presentation 110 includes a plurality of content windows (e.g., the plurality of content windows 140A-F of the illustrated embodiment) before attempting to detect any highlighted content windows (e.g., using the region of interest or template processing described above). For example, to reduce the processing requirements of the monitoring unit 120, such highlighted window detection is performed only if the monitored image corresponding to the multimedia presentation 110 indicates that the multimedia presentation 110 includes multiple content windows. Any suitable image processing technique, such as line feature detection or edge detection techniques that process, for example, the luminance and/or chrominance values of the monitored image, can be used to determine whether multiple content windows are included in the monitored multimedia presentation 110.
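As an illustration only (the patent does not specify a particular algorithm), such an edge-based multi-window check could be sketched as follows: a mosaic of content windows produces long, straight luminance discontinuities along its window borders, which can be counted from row and column differences. The function name, thresholds, and synthetic frame below are assumptions for the sketch, not part of the disclosed implementation.

```python
# Sketch: decide whether a frame likely contains a grid of content windows
# by looking for long, straight luminance discontinuities (window borders).
import numpy as np

def has_multiple_windows(luma, edge_thresh=40.0, line_fraction=0.8, min_lines=2):
    """luma: 2-D array of luminance values (0-255). Returns True if at least
    `min_lines` near-full-length horizontal or vertical edge lines are found,
    suggesting a multi-window mosaic."""
    # Absolute luminance difference between adjacent rows / columns.
    row_diff = np.abs(np.diff(luma.astype(float), axis=0))
    col_diff = np.abs(np.diff(luma.astype(float), axis=1))
    # A border row/column has a strong edge along most of its length.
    h_lines = np.sum(np.mean(row_diff > edge_thresh, axis=1) > line_fraction)
    v_lines = np.sum(np.mean(col_diff > edge_thresh, axis=0) > line_fraction)
    return (h_lines + v_lines) >= min_lines

# A synthetic 2x2 mosaic: four tiles with distinct brightness levels.
frame = np.zeros((100, 100))
frame[:50, :50] = 200; frame[:50, 50:] = 60
frame[50:, :50] = 60;  frame[50:, 50:] = 200
```

A uniform frame produces no such border lines and would be rejected, so the more expensive highlighted-window detection could be skipped for it.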
Assuming that the exemplary monitoring unit 120 determines that a content window (e.g., the exemplary content window 140A) included in the monitored multimedia presentation 110 has been highlighted, the monitoring unit 120 of the described embodiment is further operative to identify the media content presented in the highlighted content window 140A. For example, if each content window 140A-F is assigned a particular broadcast channel or, more generally, a content source provided by the example service provider 115, the example monitoring unit 120 can identify the media content presented in the highlighted content window 140A by the location of the highlighted content window 140A in the monitored multimedia presentation 110.
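The location-based identification described above can be pictured as a simple lookup: if each window position carries a fixed content source, the highlighted window's position alone identifies the media content. The grid positions and channel names below are hypothetical placeholders, not values from the disclosure.

```python
# Hypothetical mapping from a window's grid position to its assigned
# content source (positions and channel names are made up for the sketch).
WINDOW_SOURCE_MAP = {
    (0, 0): "Channel 100", (0, 1): "Channel 101", (0, 2): "Channel 102",
    (1, 0): "Channel 103", (1, 1): "Channel 104", (1, 2): "Channel 105",
}

def identify_content(window_row, window_col):
    """Return the content source assigned to the highlighted window's
    position, or 'unknown' if the position carries no assignment."""
    return WINDOW_SOURCE_MAP.get((window_row, window_col), "unknown")
```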
Additionally or alternatively, the example monitoring unit 120 can employ Optical Character Recognition (OCR) and/or logo detection to process the highlighted content window 140A to obtain identification information (e.g., such as broadcast channel number, name, logo, etc.) that can be used to identify media content appearing in the highlighted content window 140A. Additionally or alternatively, the example monitoring unit 120 can employ video/image data acquired from the example signal input 150 and/or the example camera 155 to determine video codes/signatures to identify media content appearing in the highlighted content window 140A.
Further, the example monitoring unit 120 includes a microphone 160, the microphone 160 for capturing sound emanating from the example media device 105. Although not shown, the example monitoring unit 120 may also include an audio line input in addition to or as an alternative to the example microphone 160. The exemplary monitoring unit 120 of the illustrated embodiment uses a microphone 160 to capture monitoring audio signals from the exemplary media device 105 and corresponding to the exemplary monitored multimedia presentation 110. The exemplary monitoring unit 120 then determines an audio code/signature from the monitored audio signals using any suitable technique to identify the media content appearing in the highlighted content window 140A. For example, as described above, as the viewer navigates between different content windows 140A-F of at least some of the exemplary multimedia presentations 110 (e.g., presentations corresponding to the DISH home portal channel), the sound emitted from the media device 105 changes to correspond to the particular content window that has been highlighted by the viewer. In this embodiment, the audio codes/signatures determined from the monitored audio signals correspond to media content presented in the highlighted content window (e.g., content window 140A) and, thus, can be used to identify such media content.
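The actual audio coding/signature techniques are left open by the text ("any suitable technique"), but the idea can be sketched with a toy signature: summarize a monitored audio clip by the ranking of energy across a few frequency bands, which could then be matched against reference signatures. Everything below (function name, band count, sample rate) is an assumption for illustration.

```python
# Toy audio signature: rank a clip's energy across coarse frequency bands.
import numpy as np

def band_energy_signature(samples, bands=4):
    """Return band indices ordered from most to least energetic; a coarse,
    content-dependent fingerprint of the monitored audio."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    edges = np.linspace(0, len(spectrum), bands + 1, dtype=int)
    energies = [spectrum[edges[i]:edges[i + 1]].sum() for i in range(bands)]
    return tuple(int(i) for i in np.argsort(energies)[::-1])

# One second of a 440 Hz test tone sampled at 8 kHz: its energy sits in the
# lowest frequency band, so band 0 ranks first.
t = np.arange(8000) / 8000.0
tone = np.sin(2 * np.pi * 440.0 * t)
```

Real systems use far more robust codes/signatures; this sketch only shows how an audio-derived value can stand in for the content presented in the highlighted window.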
The example monitoring unit 120 stores and reports monitoring data to the example central processing device 125 at predetermined time intervals and/or upon the occurrence of a particular event (e.g., such as when the central processing device 125 issues a prompt to the monitoring unit 120 to report any available monitoring data). The monitoring data includes, for example, detection of an exemplary multimedia presentation 110 that includes a plurality of content windows (e.g., such as windows 140A-F), a determination of which such content windows have been highlighted by the viewer, identification information of media content presented in such highlighted content windows, and so forth. In this embodiment, the example monitoring unit 120 communicates with the example central processing device 125 over the example network 130. The exemplary network 130 may be implemented by any type of communications network such as, for example, a corporate/enterprise Local Area Network (LAN), a broadband cable network, a broadband satellite network, a broadband cellular network, an Internet Service Provider (ISP) that provides access to the internet, dial-up connections, and the like.
The central processing device 125 of the illustrated embodiment processes the monitoring data received from the exemplary monitoring unit 120 to determine ratings information, content verification information, targeted advertising information, and the like. Additionally, in at least some example implementations, the central processing device 125 remotely configures the monitoring unit 120, for example, to configure a predetermined region of interest group and/or a set of templates for implementing highlighted window detection. The exemplary monitoring unit 120 of the illustrated embodiment also supports local configuration, e.g., configuration of a predetermined region of interest group and/or a set of templates for highlighted window detection by the exemplary configuration terminal 135. The exemplary configuration terminal 135 may be implemented using any type of terminal or computer, such as a computer workstation, PDA, handheld terminal, cell phone, etc.
Fig. 2 shows a block diagram of an exemplary implementation of the monitoring unit 120 of fig. 1. The example monitoring unit 120 of fig. 2 includes an example video interface 205, the example video interface 205 implemented using any video interface technology capable of communicatively coupling with the example video input 150 (and thus the video output of a monitored media device) and/or with the example camera 155 included in the monitoring unit 120, for determining (e.g., capturing) a monitored image corresponding to a multimedia presentation provided by the monitored media device (e.g., the example media device 105 of fig. 1). The example monitoring unit 120 of FIG. 2 also includes an example audio interface 210, the example audio interface 210 being implemented using any audio interface technology capable of acquiring monitored audio signals from an example audio line input (not shown) and/or the example microphone 160 included in the monitoring unit 120. The example monitoring unit 120 of FIG. 2 also includes an example network interface 215, the example network interface 215 being implemented using any network technology capable of interfacing with the example network 130 for supporting communication with a central processing device (e.g., the example central processing device 125 of FIG. 1). The example network interface 215 also supports an interface with the example configuration terminal 135 of fig. 1.
Additionally, the example monitoring unit 120 includes a monitoring data storage unit 218, the monitoring data storage unit 218 to store monitoring image data obtained via the example video interface 205, monitoring audio data obtained via the example audio interface 210, and/or any combination thereof. The example monitoring data storage unit 218 may be implemented using any type of memory or storage technology. For example, the monitoring data store 218 may be implemented as volatile memory 1018 and/or mass storage device 1030 included in the exemplary system 1000 shown in FIG. 10, described in more detail below.
To determine whether a monitored multimedia presentation (e.g., the exemplary multimedia presentation 110 of FIG. 1) includes a plurality of content windows (e.g., the exemplary content windows 140A-F), and to determine whether any such content windows are highlighted by a viewer, the exemplary monitoring unit 120 of FIG. 2 includes a highlight window detector 220, a region of interest (ROI) configuration unit 225, and a template configuration unit 230. The exemplary region of interest configuration unit 225 allows a predetermined group of regions of interest to be designated as corresponding to one or more content windows having known locations in a multimedia presentation monitored by the exemplary monitoring unit 120. In the illustrated embodiment, the region of interest configuration unit 225 receives ROI configuration information obtained from the example central processing device 125 and/or the example configuration terminal 135 of fig. 1 via the network interface 215. Examples of the ROI configuration information received by the exemplary region of interest configuration unit 225 include: the number of content windows included in the monitored multimedia presentation, the location of each content window, the size of each content window (e.g., if the content windows have predetermined shapes of different sizes), the shape of each content window (e.g., a shape and size selected from a group of possible shapes having different sizes, points or vertices designated as defining a particular content window at a particular location, etc.), and the like.
The exemplary region of interest configuring unit 225 stores a predetermined region of interest group specified using the acquired ROI configuration information in the exemplary configuration data storage unit 235. The example configuration data storage unit 235 may be implemented using any type of memory or storage technology. For example, the configuration data storage unit 235 may be implemented by a non-volatile memory 1020 included in the exemplary system 1000 shown in fig. 10, which will be described in more detail later. Additionally or alternatively, the configuration data store 235 may be implemented by a volatile memory 1018 and/or a mass storage device 1030 included in the exemplary system 1000 shown in FIG. 10, which will be described in greater detail below. As explained in more detail below, the example highlight window detector 220 is capable of retrieving a predetermined set of regions of interest stored in the example configuration data storage unit 235 to perform detection of a highlighted content window for a scene in which the content window included in the monitored multimedia presentation has a known location.
Fig. 3 shows a first exemplary set 300 of predetermined regions of interest that may be specified and stored by the exemplary region of interest configuration unit 225. The first exemplary set 300 of predetermined regions of interest corresponds to the exemplary multimedia presentation 110 of FIG. 1. More specifically, the first exemplary set 300 of predetermined regions of interest represents regions of the exemplary multimedia presentation 110 that are associated with the exemplary content windows 140A-F. Accordingly, the first exemplary set 300 of predetermined regions of interest includes six (6) predetermined regions of interest 305A-F. The six (6) predetermined regions of interest 305A-F correspond to the six (6) exemplary content windows 140A-F of the exemplary multimedia presentation 110, respectively. The six (6) predetermined regions of interest 305A-F may be specified by the exemplary region of interest configuration unit 225 using any combination of position, size, and/or shape information, or by any other type of configuration information or the like that is capable of specifying multiple regions in an image, video frame, or the like.
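The position/size/shape specification of a predetermined region of interest group such as 305A-F can be sketched as a small data structure; the names and coordinate values below are hypothetical, chosen only to illustrate how a configured ROI group maps onto regions of a monitored image.

```python
# Sketch of an ROI specification: each region is a rectangle at a known
# location, matching one content window of the monitored presentation.
from dataclasses import dataclass

@dataclass(frozen=True)
class RegionOfInterest:
    label: str      # e.g. the content window the region covers ("140A")
    x: int          # left edge, in pixels
    y: int          # top edge, in pixels
    width: int
    height: int

    def pixels(self, frame):
        """Return the sub-image of `frame` (a row-major 2-D sequence of
        pixel values) covered by this region of interest."""
        return [row[self.x:self.x + self.width]
                for row in frame[self.y:self.y + self.height]]

# A hypothetical two-region group (coordinates invented for the sketch).
roi_group = [
    RegionOfInterest("140A", 0, 0, 100, 80),
    RegionOfInterest("140B", 110, 0, 100, 80),
]
```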
A second exemplary set 400 of predetermined regions of interest that may be specified and stored by the exemplary region of interest configuration unit 225 is shown in fig. 4. The second exemplary set 400 of predetermined regions of interest also corresponds to the exemplary multimedia presentation 110 of FIG. 1. More specifically, the second exemplary set 400 of predetermined areas of interest represents areas of the exemplary multimedia presentation 110 associated with the exemplary identification tags 145A-F. Accordingly, the second exemplary set 400 of predetermined regions of interest includes six (6) predetermined regions of interest 405A-F. The six (6) predetermined regions of interest 405A-F correspond to the six (6) exemplary identification tags 145A-F, respectively, included in the exemplary multimedia presentation 110. Similar to the embodiment of FIG. 3, the six (6) predetermined regions of interest 405A-F of FIG. 4 may be specified by the exemplary region of interest configuration unit 225 using any combination of position, size, and/or shape information, or the like, or by any other type of configuration information or the like that is capable of specifying multiple regions in an image, video frame, or the like.
Predetermined regions of interest groups other than the embodiments of fig. 3 and 4 may also be configured by the exemplary region of interest configuration unit 225.
Returning to fig. 2, the example template configuration unit 230 allows for the specification of a set of templates for finding one or more content windows with potentially unknown (or known) locations in a multimedia presentation monitored by the example monitoring unit 120. In the illustrated embodiment, the template configuration unit 230 receives template configuration information obtained from the example central processing device 125 and/or the example configuration terminal 135 of FIG. 1 via the network interface 215. Examples of template configuration information obtained by the exemplary template configuration unit 230 include the number of templates used to search the monitored multimedia presentation, the size of each template (e.g., if the templates have predetermined shapes of different sizes), the shape of each template (e.g., specified as points or vertices defining the template, a selection among a set of possible templates, etc.), and the like.
The exemplary template configuration unit 230 stores a template group specified with the acquired template configuration information in the exemplary configuration data storage unit 235. As described in greater detail below, the example highlight window detector 220 is capable of retrieving the set of templates stored in the example configuration data storage unit 235 to perform detection of highlighted content windows for scenarios in which the content windows included in the monitored multimedia presentation have potentially unknown (or known) locations.
Fig. 5 illustrates an exemplary template set 500 that may be specified and stored by the exemplary template configuration unit 230. The exemplary set of templates 500 includes templates 505 and 510 that can be used individually or collectively to perform the detection of highlighted windows associated with the exemplary multimedia presentation 110 of FIG. 1. For example, the template 505 has a specified shape and size corresponding to the exemplary content windows 140A-F included in the exemplary multimedia presentation 110, and can be used to detect whether one or more of the exemplary content windows 140A-F are highlighted, as described in more detail below. The exemplary template 510 has a specified shape and size corresponding to the exemplary identification tags 145A-F included in the exemplary multimedia presentation 110, and can be used to detect whether one or more of the exemplary identification tags 145A-F are highlighted, as described in more detail below. Fig. 5 illustrates other exemplary templates 515, 520, and 525 having shapes and sizes corresponding to other possible content windows, identification tags, menus, icons, etc. that can be included in the monitored multimedia presentation. The exemplary template configuration unit 230 is also capable of configuring templates having shapes and sizes different from the embodiment illustrated in fig. 5.
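A template such as 505 or 510 can be represented concretely as a binary mask whose "on" pixels trace the highlight border of a window or tag. The helper below is an illustrative sketch (dimensions and border thickness are assumptions), showing one way the "binary image" form of a template mentioned above might look.

```python
# Sketch: store a window-shaped template as a binary border mask.
import numpy as np

def border_template(height, width, thickness=2):
    """Binary mask that is 1 along a `thickness`-pixel border, 0 inside,
    matching the outline of a highlighted content window or tag."""
    t = np.zeros((height, width), dtype=np.uint8)
    t[:thickness, :] = 1; t[-thickness:, :] = 1
    t[:, :thickness] = 1; t[:, -thickness:] = 1
    return t

# Hypothetical sizes for a content-window template and a tag template.
template_window = border_template(80, 100)
template_tag = border_template(12, 40)
```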
Returning to fig. 2, the example highlight window detector 220 included in the example monitoring unit 120 is communicatively coupled to the example video interface 205 (e.g., via the example monitoring data store 218) to obtain a monitoring image(s) determined (e.g., captured) by the video interface 205 and corresponding to a monitored multimedia presentation including a plurality of content windows, such as the example multimedia presentation 110 provided by the example media device 105 of fig. 1. For example, the video interface 205 may store the surveillance images in the example surveillance data store 218 for retrieval by the example highlight window detector 220. The highlighted window detector 220 processes the regions of the monitored image to determine respective parameter values for each of the processed regions, each parameter value indicating whether a content window is included in the monitored multimedia presentation and, if so, whether the content window is highlighted. As described below, the highlight window detector 220 selects regions to be processed using a set of regions of interest and/or a set of templates stored in the exemplary configuration data store 235. In this way, each processed region of the monitored image has a shape and possibly a position corresponding to a respective content window included in the monitored multimedia presentation.
In many multimedia presentations that include multiple content windows, when a content window is highlighted, at least a portion of a region of the multimedia presentation corresponding to the highlighted window (e.g., such as substantially all of the highlighted window, or simply a border around or along a segment of the highlighted window, etc.) presents a generally consistent level of brightness and/or chromaticity in that region (or portion of that region), and the level of brightness and/or chromaticity is significantly higher or lower than the level of brightness and/or chromaticity associated with the remainder of the multimedia presentation, including regions associated with other content windows of the multimedia presentation. Thus, in the illustrated embodiment, the highlight window detector 220 determines a parameter value for each processed region that is indicative of the luminance and/or chrominance associated with the processed region. The example highlight window detector 220 then compares the determined luminance and/or chrominance parameter values for the processed region to a highlight threshold to determine whether the processed region corresponds to a highlighted content window. In an exemplary implementation, the highlight threshold may be a fixed threshold that is determined to represent a luminance and/or chrominance parameter that is reached or exceeded by a content window that will be reliably detected as being highlighted. In another exemplary implementation, the highlight threshold may be a varying (e.g., adaptive) threshold that is determined based on the overall luminance and/or chrominance associated with the entire monitored image, the average luminance and/or chrominance associated with some or all of the regions of the possible content windows in the monitored image, a combination of these luminance and/or chrominance values, and so forth. 
In such an exemplary implementation, a processed region may be determined to correspond to a highlighted content window if its luminance and/or chrominance parameters exceed or deviate from the varying (e.g., adaptive) highlighting threshold by a specified amount, factor, etc.
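The fixed versus adaptive highlight test described above can be sketched as follows. The threshold values and the adaptation rule (region mean luminance compared against a multiple of the whole frame's mean) are illustrative assumptions, not values from the disclosure.

```python
# Sketch of the fixed vs. adaptive highlight test (values illustrative).
import numpy as np

def region_luma(frame, roi):
    """Mean luminance of a rectangular region; roi = (y, x, h, w)."""
    y, x, h, w = roi
    return float(frame[y:y + h, x:x + w].mean())

def is_highlighted(frame, roi, fixed_thresh=None, adapt_factor=1.5):
    luma = region_luma(frame, roi)
    if fixed_thresh is not None:          # fixed-threshold variant
        return luma >= fixed_thresh
    # Adaptive variant: compare against the whole frame's mean luminance.
    return luma >= adapt_factor * float(frame.mean())

# Synthetic monitored image: one bright (highlighted) window on a dim field.
frame = np.full((60, 90), 50.0)
frame[0:20, 0:30] = 220.0
```

The same shape of test applies to chrominance parameter values; only the per-pixel quantity being averaged changes.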
An exemplary implementation of the highlight window detector 220 of FIG. 2 is shown in FIG. 6. Turning to fig. 6, the highlight window detector 220 of the illustrated embodiment includes a region selector 605, a luminance comparator 610, and a chrominance comparator 615 to detect a highlighted content window using a predetermined region of interest group (e.g., the predetermined region of interest groups 300 and/or 400 described above in connection with fig. 3 and 4). The exemplary highlight window detector 220 of FIG. 6 also includes a binary image converter 620 and a template correlator 625 to employ a template set (e.g., the template set 500 described above in connection with FIG. 5) to detect the highlighted content window. The example highlight window detector 220 of FIG. 6 also includes a decision unit 630, the decision unit 630 for detecting a highlighted content window included in the monitored multimedia presentation based on information determined by the example luma comparator 610, the example chroma comparator 615, and/or the example template correlator 625.
To support detection of a highlighted content window based on a predetermined region of interest, the region selector 605 selects a predetermined region of a monitored image, e.g., obtained (e.g., captured) by the video interface 205 of FIG. 2, according to a predetermined region of interest group as configured by the region of interest configuration unit 225 of FIG. 2. For example, the region selector 605 may select a predetermined region of interest of the monitored image, such as one of the regions 305A-F of FIG. 3 or one of the regions 405A-F of FIG. 4. The example region selector 605 then provides the selected region of the monitored image to the example luminance comparator 610 and/or the example chrominance comparator 615 for subsequent processing to determine respective luminance and/or chrominance parameter values for the selected region. Depending on the particular embodiment, the region selector 605 continues to select predetermined regions of interest from the configured set of predetermined regions of interest until, for example, all regions of interest have been processed, a specified subset of the regions of interest has been processed, a highlighted content window has been detected, and so forth.
The example brightness comparator 610 included in the example highlight window detector 220 of FIG. 6 processes a selected region of the monitored image provided by the example region selector 605 to determine a respective brightness parameter value for the selected region. The brightness comparator 610 of the illustrated embodiment may be implemented using any technique for determining the brightness associated with monitoring a selected region of an image. For example, the brightness comparator 610 may determine the brightness associated with the selected region of the monitored image by processing the pixels included in the selected region, determining the brightness of the pixels and then summing, averaging, integrating, etc., or otherwise combining the brightness of the pixels to determine a brightness parameter value for the selected region. In some exemplary implementations, instead of processing all pixels in the selected region of the monitoring image, the brightness comparator 610 may be configured to process a subset of the pixels of the selected region to, for example, exclude outliers, reduce processing requirements, and the like.
The example brightness comparator 610 also compares the brightness parameter value determined for the selected region of the monitored image to a brightness-based highlighting threshold to determine whether the respective content window associated with the selected region location in the monitored image is highlighted. In the illustrated embodiment, the brightness-based highlighting threshold used by the exemplary brightness comparator 610 corresponds to a brightness associated with a human-perceptible highlighting of at least one of the plurality of content windows included in the monitored multimedia presentation. In an exemplary implementation, the brightness comparator 610 is configured with a fixed brightness-based highlighting threshold that is determined to represent a brightness parameter that a selected region corresponding to a respective content window must meet or exceed in order for the content window to be reliably detected as a highlighted content window. In another exemplary implementation, the brightness-based threshold is a varying (e.g., adaptive) threshold that depends on the overall brightness associated with the entire monitored image, the average brightness associated with some or all of the regions corresponding to possible content windows in the monitored image, a combination of these brightness values, etc., as determined by the exemplary brightness comparator 610.
An exemplary chrominance comparator 615 included in the exemplary highlight window detector 220 of FIG. 6 processes a selected region of the monitored image provided by the exemplary region selector 605 to determine respective chrominance parameter values for the selected region. Chroma comparator 615 of the illustrated embodiment may be implemented using any technique for determining chroma associated with monitoring a selected region of an image. For example, the chroma comparator 615 may determine the chroma associated with a selected region of the surveillance image by processing pixels included in the selected region, determining the chroma of the pixels, and then summing, averaging, integrating, etc., or otherwise combining the chroma of the pixels to determine a chroma parameter value for the selected region. In some exemplary implementations, instead of processing all pixels in a selected region of the surveillance image, the chrominance comparator 615 may be configured to process a subset of the pixels of the selected region to, for example, exclude outliers, reduce processing requirements, and the like.
The example chroma comparator 615 also compares a chroma parameter value determined for the selected region of the monitored image to a chroma-based highlighting threshold to determine whether each content window associated with the selected region location in the monitored image is highlighted. In the illustrated embodiment, the chroma-based highlighting threshold used by the exemplary chroma comparator 615 corresponds to a chroma associated with a human-perceptible highlighting of at least one of the plurality of content windows included in the monitored multimedia presentation. In an exemplary implementation, the chroma comparator 615 is configured with a fixed chroma-based highlighting threshold that is determined to represent a chroma parameter that a selected region corresponding to a respective content window must meet or exceed to reliably detect the content window as being a highlighted content window. In another exemplary implementation, the chroma-based highlighting threshold is a varying (e.g., adaptive) threshold that depends on the overall chroma determined by the exemplary chroma comparator 615 in relation to the entire monitored image, the average chroma in relation to some or all of the regions in the monitored image that correspond to possible windows of content, a combination of the chroma values, and/or the like.
An exemplary decision unit 630 included in the exemplary highlight window detector 220 of fig. 6 processes the comparison results determined by the exemplary luma comparator 610 and/or the exemplary chroma comparator 615 to determine whether the respective content window associated with the location of the selected area in the monitored image is highlighted. In an exemplary implementation, the decision unit 630 is configured to determine that the location of the selected region of the monitored image corresponds to a highlighted content window in the monitored multi-window multimedia presentation if the comparison results determined by the exemplary luma comparator 610 and/or the exemplary chroma comparator 615 indicate that the determined luma and/or chroma parameter values are greater than or equal to the associated highlight threshold values. In another exemplary implementation, the determination unit 630 is configured to determine that the location of the selected region of the monitored image corresponds to a highlighted content window in the monitored multi-window multimedia presentation if the comparison results determined by the exemplary luminance comparator 610 and/or the exemplary chrominance comparator 615 indicate that the determined luminance and/or chrominance parameter values deviate from the associated highlighting threshold by a specified amount, factor, or the like. 
Depending on the particular exemplary implementation, the decision unit 630 may be configured to determine that the location of the selected region of the monitored image corresponds to a highlighted content window in the monitored multi-window multimedia presentation if either of the determined luminance and/or chrominance parameter values is greater than or equal to, or deviates sufficiently from, its associated threshold value, or the decision unit 630 may be configured to require that both the luminance and chrominance parameter values be greater than or equal to, or deviate sufficiently from, their associated threshold values before the decision unit 630 determines that a highlighted content window is located in the selected region of the monitored image.
To support template-based detection of highlighted content windows, the example binary image converter 620 of FIG. 6 converts a monitored image, such as obtained (e.g., captured) by the video interface 205 of FIG. 2, into a binary monitored image corresponding to a monitored multimedia presentation. In the illustrated embodiment, the binary image converter 620 uses a binary conversion threshold to convert the grayscale or color pixels of the monitor image to corresponding binary pixels of the binary monitor image. Each binary pixel has one of two possible values (e.g., black or white). In an exemplary implementation, the binary transition threshold is configured and/or determined to correspond to pixel brightness values associated with human-perceptible highlighting of a content window included in a multi-window multimedia presentation. In another exemplary implementation, the binary transition threshold is configured and/or determined to correspond to a pixel chrominance value associated with a human-perceptible highlighting of a content window included in a multi-window multimedia presentation. In yet another exemplary implementation, the binary transition threshold is configured and/or determined to correspond to a combined pixel luminance chroma value associated with human-perceptible highlighting of a window of content included in a multi-window multimedia presentation. Such binary transition thresholds may be fixed or variable and are based on configuration or training of the exemplary binary image converter 620 using images known to include highlighted content windows and other images known to not include highlighted content windows to perform appropriate binary image conversions for the particular multimedia presentation and/or media device being monitored.
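The binarization step can be sketched directly: pixels at or above the binary conversion threshold become 1 and all others 0. The threshold value below is illustrative (in practice it would be fixed or trained as described above).

```python
# Sketch of the binary image conversion (threshold value illustrative).
import numpy as np

def to_binary(luma_frame, conversion_thresh=180):
    """Convert a grayscale monitored image into a binary monitored image:
    1 where luminance meets or exceeds the threshold, else 0."""
    return (np.asarray(luma_frame) >= conversion_thresh).astype(np.uint8)

gray = np.array([[10, 200],
                 [179, 255]])
```

A chrominance-based or combined luminance/chrominance conversion would follow the same pattern with a different per-pixel quantity.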
The exemplary template correlator 625 included in the exemplary highlight window detector 220 of FIG. 6 processes the binary monitor image determined by the exemplary binary image converter 620 using a template selected from the set of templates configured, for example, by the template configuration unit 230 of FIG. 2. For example, the template correlator 625 may select one of the templates 505A-F of FIG. 5 for processing the binary monitor image. Based on the particular exemplary implementation, the template correlator 625 continues to select templates from the set of configured templates for processing the binary monitor image until, for example, all templates have been processed, a specified subset of templates have been processed, a highlighted content window has been detected, and so forth.
After selecting a particular template, the example template correlator 625 begins to move the selected template over a combination of horizontally and vertically displaced positions that cover a portion or all of the binary monitor image. For each horizontal and vertical shift position, the example template correlator 625 selects the respective region of the binary monitor image that is located at that particular horizontal and vertical shift position and is defined by the template. The exemplary template correlator 625 then correlates the selected region of the binary monitor image with the template to determine a correlation parameter value associated with the selected region of the binary monitor image. Because the binary image converter 620 uses a binary conversion threshold based on pixel luminance values, pixel chrominance values, or both, the correlation parameter values for the selected region of the binary monitor image determined by the example template correlator 625 represent the number of pixels in the selected region of the monitor image that have luminance and/or chrominance values greater than or equal to the binary conversion threshold.
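The sliding correlation described above might be sketched as follows; the list-of-lists image representation and the exhaustive scan over all shift positions are illustrative assumptions:

```python
def correlate_template(binary_image, template, top, left):
    """Correlation parameter value for one horizontally and vertically
    shifted position: the count of white pixels of the binary monitor image
    that fall inside the template's window region at that position.
    template: rows of 0/1 values, where 1 marks pixels belonging to the
    candidate window region (an assumed template representation)."""
    score = 0
    for row_offset, template_row in enumerate(template):
        for col_offset, template_pixel in enumerate(template_row):
            if template_pixel and binary_image[top + row_offset][left + col_offset]:
                score += 1
    return score

def best_match(binary_image, template):
    """Shift the template over every position covering the binary monitor
    image and return the highest correlation parameter value together with
    its (top, left) shift position."""
    image_height, image_width = len(binary_image), len(binary_image[0])
    template_height, template_width = len(template), len(template[0])
    best_score, best_position = 0, (0, 0)
    for top in range(image_height - template_height + 1):
        for left in range(image_width - template_width + 1):
            score = correlate_template(binary_image, template, top, left)
            if score > best_score:
                best_score, best_position = score, (top, left)
    return best_score, best_position
```

The decision processing described next would then compare the returned correlation parameter value against a highlight threshold.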
An exemplary decision unit 630 included in the exemplary highlight window detector 220 of FIG. 6 then processes the correlation parameter values determined by the exemplary template correlator 625 to determine whether a content window is highlighted in the monitored multimedia presentation at the horizontally and vertically shifted position corresponding to the selected region of the binary monitor image. In an exemplary implementation, the decision unit 630 is configured to determine that the selected region of the monitored image at a particular horizontally and vertically shifted position corresponds to a highlighted content window in the monitored multi-window multimedia presentation if the correlation parameter value determined by correlating the selected region with the selected template is greater than or equal to a highlight threshold. In another exemplary implementation, the decision unit 630 is configured to determine that the selected region of the monitored image at a particular horizontally and vertically shifted position corresponds to a highlighted content window in the monitored multi-window multimedia presentation if the correlation parameter value determined by correlating the selected region with the selected template deviates from the highlight threshold by a specified amount, factor, or the like. Because the correlation parameter value determined by the exemplary template correlator 625 for the selected region of the binary monitor image indicates the number of pixels in the selected region of the monitor image having luminance and/or chrominance values greater than or equal to the binary conversion threshold, the highlight threshold may be specified as the minimum number of pixels having luminance and/or chrominance values greater than or equal to the binary conversion threshold that is required for the selected region to be determined to correspond to a highlighted content window.
Although an exemplary manner of implementing the highlight window detector 220 of FIG. 2 has been illustrated in FIG. 6, one or more of the elements, steps and/or devices illustrated in FIG. 6 may be combined, separated, rearranged, omitted, deleted and/or implemented in any other way. Further, the example region selector 605, the example luminance comparator 610, the example chrominance comparator 615, the example binary image converter 620, the example template correlator 625, the example decision unit 630, and/or, more generally, the example highlight window detector 220 of FIG. 6 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the example region selector 605, the example luminance comparator 610, the example chrominance comparator 615, the example binary image converter 620, the example template correlator 625, the example decision unit 630, and/or, more generally, the example highlight window detector 220 may be implemented by one or more circuits, programmable processors, Application Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), and/or Field Programmable Logic Devices (FPLDs), among others. When any of the appended claims are read to cover a purely software and/or firmware implementation, at least one of the example highlight window detector 220, the example region selector 605, the example luminance comparator 610, the example chrominance comparator 615, the example binary image converter 620, the example template correlator 625, and/or the example decision unit 630 is hereby expressly defined to include a tangible medium, such as a memory, a Digital Versatile Disc (DVD), a Compact Disc (CD), etc., storing such software and/or firmware. Still further, the example highlight window detector 220 of fig. 6 may include one or more elements, steps and/or devices in addition to, or instead of, those shown in fig. 6, and/or may include more than one of any or all of the illustrated elements, steps and devices.
Returning to fig. 2, the monitoring unit 120 of the illustrated embodiment further includes a channel detector 240 for determining whether a media device (e.g., such as the exemplary media device 105) being monitored by the monitoring unit 120 has been tuned to a particular known channel that can provide a multimedia presentation including a plurality of content windows (e.g., such as the exemplary multimedia presentation 110 including the plurality of content windows 140A-F). In the illustrated embodiment, the channel detector 240 is communicatively coupled to the exemplary highlight window detector 220 and provides an indication that the monitored media device is tuned to a particular known channel of the plurality of tunable channels (e.g., such as the DISH home portal channel) when the exemplary highlight window detector 220 detects that a content window in the monitored multimedia presentation is highlighted. Detection of such a highlighted content window in the monitored multimedia presentation may be a good indication that the media device being monitored by the exemplary monitoring unit 120 has been tuned to a particular known channel that can provide a multi-window multimedia presentation. The exemplary channel detector 240 may additionally or alternatively employ any other type of channel detection technique capable of determining the channel to which the monitored media device has been tuned.
The monitoring unit 120 of the illustrated embodiment further includes a content identifier 245 for identifying content presented in a highlighted content window of the monitored multi-window multimedia presentation. In the illustrated embodiment, the content identifier 245 is communicatively coupled to the exemplary highlight window detector 220 and is operable to identify only the media content presented in the particular content window of the monitored multimedia presentation that is detected as being highlighted by the highlight window detector 220. In an exemplary implementation, the content windows included in the monitored multimedia presentation are pre-assigned to particular broadcast channels (or, more generally, content sources). In such an implementation, the content identifier 245 identifies content presented in the highlighted content window detected by the exemplary highlight window detector 220 as corresponding to the particular broadcast channel or content source assigned to the highlighted content window. In another exemplary implementation, the content identifier 245 employs OCR and/or logo detection to process content presented in the highlighted content window detected by the exemplary highlight window detector 220 to obtain identifying information (e.g., such as a broadcast channel number, name, logo, etc.) that can be used to identify the source of the content presented in the highlighted content window. In yet another exemplary implementation, the content identifier 245 determines audio codes and/or signatures from audio signals emitted by the monitored media device, and/or video codes and/or signatures from video signals emitted by the media device, to identify content presented in the highlighted content window detected by the exemplary highlight window detector 220.
In yet another exemplary implementation, the content identifier 245 identifies content presented in the highlighted content window detected by the exemplary highlight window detector 220 using any combination of pre-designated broadcast channels and/or content sources, OCR and/or logo detection, audio and/or video codes and/or signatures, and the like.
While an example manner of implementing the example monitoring unit 120 of fig. 1 has been illustrated in fig. 2, one or more of the elements, steps and/or devices illustrated in fig. 2 may be combined, separated, rearranged, omitted, deleted and/or implemented in any other way. Further, the example video interface 205, the example audio interface 210, the example network interface 215, the example highlight window detector 220, the example area of interest configuration unit 225, the example template configuration unit 230, the example channel detector 240, the example content identifier 245, and/or more generally, the example monitoring unit 120 of fig. 2 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the example video interface 205, the example audio interface 210, the example network interface 215, the example highlight window detector 220, the example area-of-interest configuration unit 225, the example template configuration unit 230, the example channel detector 240, the example content identifier 245, and/or, more generally, the example monitoring unit 120 may be implemented by one or more circuits, programmable processors, Application Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), and/or Field Programmable Logic Devices (FPLDs), among others. 
When any of the appended claims are read to cover a purely software and/or firmware implementation, at least one of the example monitoring unit 120, the example video interface 205, the example audio interface 210, the example network interface 215, the example highlight window detector 220, the example area-of-interest configuration unit 225, the example template configuration unit 230, the example channel detector 240, and/or the example content identifier 245 is hereby expressly defined to include a tangible medium, such as a memory, a Digital Versatile Disk (DVD), a Compact Disk (CD), etc., to store such software and/or firmware. Still further, the example monitoring unit 120 of fig. 2 may include one or more elements, steps and/or devices in addition to or in place of those shown in fig. 2, and/or may include more than one of any or all of the above-described elements, steps and devices shown.
Fig. 7, 8, and 9A-9B illustrate flowcharts representative of example machine readable instructions that may be executed to implement the example media content monitoring system 100, the example monitoring unit 120, the example video interface 205, the example audio interface 210, the example network interface 215, the example highlight window detector 220, the example area of interest configuration unit 225, the example template configuration unit 230, the example channel detector 240, the example content identifier 245, the example area selector 605, the example luminance comparator 610, the example chrominance comparator 615, the example binary image converter 620, the example template correlator 625, and/or the example decision unit 630. In these embodiments, the machine readable instructions represented by the flowcharts may include one or more programs that are executed by: (a) a processor, such as processor 1012 shown in exemplary computer 1000 discussed below in connection with FIG. 10; (b) a controller; and/or (c) any other suitable device. The one or more programs may be implemented as software stored on a tangible medium such as, for example, flash memory, CD-ROM, floppy disk, hard disk, DVD, or memory associated with the processor 1012, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 1012 and/or implemented as firmware or dedicated hardware (e.g., implemented by an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), a Field Programmable Logic Device (FPLD), discrete logic, etc.). 
For example, any or all of the example media content monitoring system 100, the example monitoring unit 120, the example video interface 205, the example audio interface 210, the example network interface 215, the example highlight window detector 220, the example region of interest configuration unit 225, the example template configuration unit 230, the example channel detector 240, the example content identifier 245, the example region selector 605, the example brightness comparator 610, the example chrominance comparator 615, the example binary image converter 620, the example template correlator 625, and/or the example decision unit 630 may be implemented by any combination of software, hardware, and/or firmware. Also, some or all of the machine readable instructions represented by the flowcharts of FIGS. 7, 8, and 9A-9B may be implemented manually. Further, although the example machine readable instructions are described with reference to the flowcharts illustrated in fig. 7, 8, and 9A-9B, many other techniques for implementing the example methods and apparatus described herein may alternatively be used. For example, with reference to the flowcharts of fig. 7, 8, and 9A-9B, the order of execution of the blocks may be changed, and/or some of the blocks may be changed, eliminated, combined, and/or subdivided into multiple blocks.
The flow diagram shown in fig. 7 represents example machine readable instructions 700 that may be executed to implement the configuration process of the example monitoring unit 120 of fig. 1 and/or 2. The example machine readable instructions 700 may be executed when the example monitoring unit 120 is turned on and/or restarted, when a configuration mode of the example monitoring unit 120 is activated, etc., or any combination thereof. Referring to the example monitoring unit 120 of FIG. 2, the machine readable instructions 700 begin execution at block 705 of FIG. 7, at which the example region of interest configuration unit 225 included in the example monitoring unit 120 configures a predetermined group of regions of interest corresponding to the known locations of the group of content windows included in a monitored multimedia presentation. For example, at block 705, the region of interest configuration unit 225 receives region of interest configuration information obtained from the example central processing device 125 and/or the example configuration terminal 135 via the network interface 215. In an exemplary implementation, the received region of interest configuration information includes the quantity, location, size, shape, and/or any other type of information that allows the example region of interest configuration unit 225 to specify the example predetermined group of regions of interest 300 and/or 400 corresponding to the example multimedia presentation 110 having the plurality of content windows 140A-F.
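A minimal sketch of what such region of interest configuration information might look like follows; the field names, window labels, and all coordinate values are invented for illustration and do not come from the disclosure:

```python
# Hypothetical configuration records, one per content window 140A-F, as they
# might be received from the central processing device or configuration
# terminal; coordinates assume an illustrative 720 x 480 display layout.
REGIONS_OF_INTEREST = [
    {"window": "140A", "top": 20,  "left": 30,  "height": 140, "width": 200},
    {"window": "140B", "top": 20,  "left": 260, "height": 140, "width": 200},
    {"window": "140C", "top": 20,  "left": 490, "height": 140, "width": 200},
    {"window": "140D", "top": 180, "left": 30,  "height": 140, "width": 200},
    {"window": "140E", "top": 180, "left": 260, "height": 140, "width": 200},
    {"window": "140F", "top": 180, "left": 490, "height": 140, "width": 200},
]

def configure_regions(config_records):
    """Validate the received records and index the predetermined group of
    regions of interest by content window label."""
    for record in config_records:
        if record["height"] <= 0 or record["width"] <= 0:
            raise ValueError("region of interest must have positive size")
    return {record["window"]: (record["top"], record["left"],
                               record["height"], record["width"])
            for record in config_records}
```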
Next, control proceeds to block 710 where the example template configuration unit 230 included in the example monitoring unit 120 configures a set of templates to find one or more content windows in the monitored multimedia presentation that have potentially unknown (and known) locations. For example, at block 710, the example template configuration unit 230 receives template configuration information from the example central processing device 125 and/or the example configuration terminal 135 obtained via the network interface 215. In an exemplary implementation, the retrieved template configuration information includes quantity, size, shape, and/or any other type(s) of information that allows the template configuration unit 230 to specify the example set of templates 500 for detecting highlighted content windows in the example multimedia presentation 110 having the plurality of content windows 140A-F. After the process of block 710 is complete, execution of the example machine readable instructions 700 ends.
The flow diagram shown in fig. 8 represents example machine readable instructions 800 that may be executed to implement the monitoring process of the example monitoring unit 120 of fig. 1 and/or 2. The example machine readable instructions 800 may be executed continuously, at predetermined intervals, upon detection of an event (e.g., such as detection of a channel change event, detection of a remote control button press, detection of a command sent by a remote control or other input device, etc.), or the like, or any combination thereof. Referring to the example monitoring unit 120 of fig. 2, the example machine readable instructions 800 begin execution at block 805 of fig. 8, at which block 805 the example monitoring unit 120 determines whether a preliminary channel detection procedure is available to determine whether a monitored media device (e.g., such as the example media device 105) is tuned to a channel capable of providing a multimedia presentation including a plurality of content windows (e.g., such as the example multimedia presentation 110 including the plurality of content windows 140A-F). If such a preliminary channel detection procedure is available (block 805), control proceeds to block 810 where the example channel detector 240 employs any suitable channel detection technique to determine whether the monitored media device has been tuned to a known (e.g., predetermined or previously identified) channel (also referred to as a "multi-window channel") capable of providing a multimedia presentation including a plurality of content windows. If such a multi-window channel is detected (block 810), or if the preliminary channel detection procedure is not available (block 805), control passes to block 815. Otherwise, control proceeds to block 820.
At block 815, the example monitoring unit 120 performs a highlighted window detection process to determine whether the monitored multimedia presentation (e.g., such as the multimedia presentation 110) includes one or more highlighted content windows (e.g., such as the example highlighted content window 140A). Exemplary machine readable instructions that may be executed to implement the process at block 815 are illustrated in fig. 9A-9B and described in more detail below. Control then proceeds to block 825 where the example monitoring unit 120 determines whether a highlighted content window is detected by the process at block 815. If a highlighted content window is not detected (block 825), control proceeds to block 820. However, if a highlighted content window is detected (block 825), control proceeds to block 830.
At block 830, the example channel detector 240 included in the example monitoring unit 120 determines whether the media device (e.g., the example media device 105) being monitored by the monitoring unit 120 has been tuned to a multi-window channel. For example, at block 830, the example channel detector 240 provides an indication that the monitored media device is tuned to a particular known channel (e.g., the DISH home portal channel) of the plurality of tunable channels as a result of the processing at block 815 detecting that a content window in the monitored multimedia presentation is highlighted. Control then proceeds to block 835, at which the example content identifier 245 included in the example monitoring unit 120 identifies content presented in the highlighted content window detected by the highlighted window detection process at block 815. For example, as described in more detail above, at block 835 the example content identifier 245 may identify content presented in the highlighted content window detected by the processing at block 815 using any one or combination of pre-assigned broadcast channels and/or content sources, OCR and/or logo detection, audio and/or video codes and/or signatures, and the like.
Next, control proceeds to block 820 where the example monitoring unit 120 determines whether monitoring continues. For example, at block 820, the example monitoring unit 120 may determine whether to continue monitoring based on whether the monitoring unit 120 is configured to monitor continuously, at predetermined intervals, based on detection of a particular event, or the like. If monitoring continues (block 820), control passes back to block 805 and its subsequent blocks to continue the monitoring process. If, however, monitoring does not continue (block 820), execution of the example machine readable instructions 800 ends.
The flow diagrams illustrated in fig. 9A-9B represent example machine readable instructions 815 that may be executed to implement the highlight window detection routine to implement the processing at block 815 of fig. 8, and/or the example highlight window detector 220 of fig. 2 and/or 6. Referring to the example highlight window detector 220 of FIG. 6, the machine-readable instructions 815 begin execution at block 905 of FIG. 9A, where the example highlight window detector 220 acquires a monitored image from the example video interface 205 that corresponds to a display of a monitored media device (e.g., the example media device 105). For example, at block 905, the example video interface 205 may determine (e.g., capture) a surveillance image from the example video input 150 communicatively coupled to the video output of the monitored media device. As an alternative embodiment, at block 905, the example video interface 205 may determine (e.g., capture) a surveillance image from the example camera 155 positioned to view the display of the monitored media device.
Next, control proceeds to block 910 where the example highlight window detector 220 determines whether highlight window detection based on region-of-interest matching is supported (e.g., based on configuration data maintained in the highlight window detector 220 and/or the monitoring unit 120). If highlight window detection based on region of interest matching is supported (block 910), control proceeds to block 915 where the example region selector 605 included in the example highlight window detector 220 selects (e.g., extracts) a predetermined region of interest from the monitored image obtained at block 905. At block 915, the selected (e.g., extracted) region of the surveillance image corresponds to a region of interest selected from a predetermined group of regions of interest configured, for example, by executing the example machine readable instructions 700 of fig. 7.
Control then proceeds to block 920, at which the example brightness comparator 610 included in the example highlight window detector 220 processes the selected region of the monitored image determined at block 915 to determine one or more brightness parameter values for the selected region, as described in more detail above. Additionally or alternatively, at block 920 the example chrominance comparator 615 included in the example highlight window detector 220 processes the selected region of the monitored image determined at block 915 to determine one or more chrominance parameter values for the selected region, as described in more detail above. Next, control proceeds to block 925, at which the example brightness comparator 610 compares the brightness parameter value determined at block 920 for the selected region of the monitored image to a fixed or variable brightness-based highlight threshold. Additionally or alternatively, at block 925 the example chrominance comparator 615 compares the chrominance parameter value determined at block 920 for the selected region of the monitored image to a fixed or variable chrominance-based highlight threshold. The highlight threshold(s) used at block 925 are configured and/or determined to represent a luminance and/or chrominance associated with a human-perceptible highlighting of at least one of the plurality of content windows included in the monitored multimedia presentation, as described in more detail above.
Control then proceeds to block 930, at which the example decision unit 630 included in the example highlight window detector 220 determines whether the comparison at block 925 of the luminance and/or chrominance parameter values for the selected region against the respective luminance-based and/or chrominance-based highlight thresholds indicates that the highlight window detection criteria have been met. For example, if the comparison performed at block 925 indicates that the determined luminance and/or chrominance parameter values are greater than or equal to the respective highlight thresholds, the example decision unit 630 may determine that a highlighted content window corresponds to the selected region of the monitored image being processed. Additionally or alternatively, if the comparison performed at block 925 indicates that the determined luminance and/or chrominance parameter values deviate from the respective highlight thresholds by a specified amount, factor, or the like, the example decision unit 630 may determine that a highlighted content window corresponds to the selected region of the monitored image being processed. If the example decision unit 630 determines that the highlight window detection criteria have been met (block 930), control proceeds to block 935, at which the example decision unit 630 indicates that the content window corresponding to the selected region of interest being processed is highlighted in the monitored multimedia presentation. Execution of the example machine readable instructions 815 then ends.
However, if the example determination unit 630 determines that the highlight window detection criteria have not been met (block 930), then control proceeds to block 940, where the example region selector 605 included in the example highlight window detector 220 determines whether all of the configured regions of interest have been processed. If all regions of interest have not been processed (block 940), control returns to block 915 and its subsequent blocks, and the example region selector 605 selects the next region of interest in the configured set of regions of interest for processing. However, if all regions of interest have been processed (block 940), or if highlight window detection based on region of interest matching is not supported (block 910), control proceeds to block 945 of FIG. 9B.
Turning to fig. 9B, at block 945, the example highlight window detector 220 determines whether template matching based highlight window detection is supported (e.g., based on configuration data maintained in the highlight window detector 220 and/or the monitoring unit 120). If template matching based highlight window detection is not supported (block 945), execution of the example machine readable instructions 815 is complete. However, if template matching based highlight window detection is supported (block 945), control proceeds to block 950, where the example binary image converter 620 included in the example highlight window detector 220 converts the monitor image obtained at block 905 of FIG. 9A to a binary monitor image. For example, as described in more detail above, at block 950, the example binary image converter 620 uses binary conversion thresholds based on pixel luminance and/or chrominance values to convert grayscale or color pixels of the monitor image to respective binary pixels of the binary monitor image.
Next, control proceeds to block 955, at which the example template correlator 625 included in the example highlight window detector 220 selects one template from the set of templates that has been configured by, for example, execution of the example machine readable instructions 700 of FIG. 7. Control then proceeds to block 960, at which the example template correlator 625 shifts the selected template over a combination of horizontally shifted and vertically shifted positions covering some or all of the binary monitor image. As described in greater detail above, for each horizontally and vertically shifted position, the example template correlator 625 selects the respective region of the binary monitor image defined by the template, and correlates the selected region of the binary monitor image with the template to determine a correlation parameter value representing the number of pixels in the selected region of the monitor image having luminance and/or chrominance values greater than or equal to the binary conversion threshold.
Control then proceeds to block 965, at which the example decision unit 630 included in the example highlight window detector 220 determines whether the correlation parameter value determined at block 960 by correlating the binary monitor image with the selected template at any of the horizontally and vertically shifted positions indicates that the highlight window detection criteria have been met. For example, at block 965, if the correlation parameter value for a particular horizontally and vertically shifted position is greater than or equal to, or alternatively deviates from, the highlight threshold, the example decision unit 630 determines that the correlation of the binary monitor image with the selected template at that position indicates a content window highlighted at that location of the monitored multimedia presentation. Because the correlation parameter value determined at block 960 by correlating the selected template with the binary monitor image at a particular shifted position represents the number of monitored image pixels within the template region having luminance and/or chrominance values greater than or equal to the binary conversion threshold, the highlight threshold may be specified as the minimum number of pixels within the template region having luminance and/or chrominance values greater than or equal to the binary conversion threshold that is required for the region of the monitored image at that particular shifted position to be determined to correspond to a highlighted content window.
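As a sketch of how the highlight threshold might be expressed as a minimum pixel count, the decision could be written as follows; the 0.9 fraction is an assumed tuning value, not one taken from the disclosure:

```python
def meets_highlight_criteria(correlation_score, template, fraction=0.9):
    """Treat the detection criteria as met when the correlation score reaches
    at least `fraction` of the template's window pixels; i.e., the highlight
    threshold is the minimum number of white pixels required within the
    template region."""
    template_pixel_count = sum(sum(row) for row in template)
    return correlation_score >= fraction * template_pixel_count
```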
If the example decision unit 630 determines that the highlighted window detection criteria have been met (block 965), control proceeds to block 970, at which the example decision unit 630 indicates that the highlighted content window is positioned at the particular horizontally and vertically shifted position at which the correlation parameter value meeting the detection criteria was obtained. Execution of the example machine readable instructions 815 then ends. However, if the example decision unit 630 determines that the highlighted window detection criteria have not been met (block 965), control proceeds to block 975, at which the example template correlator 625 included in the example highlight window detector 220 determines whether all templates in the configured template set have been processed. If all templates have not been processed (block 975), control returns to block 955 and its subsequent blocks to select the next template in the configured template set for processing of the binary monitor image. However, if all templates have been processed (block 975), execution of the example machine readable instructions 815 is complete.
Fig. 10 is a block diagram of an exemplary computer 1000 capable of implementing the apparatus and methods disclosed herein. The computer 1000 may be, for example, a server, a personal computer, a Personal Digital Assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a personal video recorder, a set-top box, or any other type of computing device.
The system 1000 of the instant example includes a processor 1012, such as a general-purpose programmable processor. The processor 1012 includes a local memory 1014 and executes coded instructions 1016 stored in the local memory 1014 and/or in other storage devices. The processor 1012 may execute the machine readable instructions represented in Figs. 7, 8, and/or 9A-9B. The processor 1012 may be any type of processing unit, such as one or more microprocessors from the Intel® Centrino® family of microprocessors, the Intel® Pentium® family of microprocessors, the Intel® Itanium® family of microprocessors, and/or the Intel® XScale® family of processors. Of course, other processors from other families are also suitable.
The processor 1012 communicates with a main memory including a volatile memory 1018 and a non-volatile memory 1020 via a bus 1022. Static Random Access Memory (SRAM), Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), and/or any other type of random access memory device may be used to implement the volatile memory 1018. Flash memory and/or any other desired type of memory device may be used to implement the non-volatile memory 1020. Access to the main memory 1018, 1020 is typically controlled by a memory controller (not shown).
The computer 1000 also includes an interface circuit 1024. Any type of interface standard, such as an Ethernet interface, a Universal Serial Bus (USB), and/or a third generation input/output (3GIO) interface, may be used to implement the interface circuit 1024.
The interface circuit 1024 interfaces with one or more input devices 1026. The input devices 1026 allow a user to enter data and commands into the processor 1012. The input devices may be implemented by, for example, a keyboard, a mouse, a touchscreen, a touchpad, a trackball, an isopoint, and/or a voice recognition system.
The interface circuit 1024 also connects to one or more output devices 1028. The output devices 1028 can be implemented, for example, by display devices (e.g., a liquid crystal display or a cathode ray tube (CRT) display), by a printer, and/or by speakers. The interface circuit 1024 thus typically includes a graphics card.
The interface circuit 1024 also includes a communication device such as a modem or network card for exchanging data with external computers via a network (e.g., an ethernet connection, a Digital Subscriber Line (DSL), a telephone line, a coaxial cable, a mobile telephone system, etc.).
The computer 1000 also includes one or more mass storage devices 1030 for storing software and data. Examples of such mass storage devices 1030 include floppy disk drives, hard disk drives, compact disk drives, and Digital Versatile Disk (DVD) drives. The mass storage 1030 may implement the example monitoring data store 218 and/or the example configuration data store 235. Additionally or alternatively, the volatile memory 1018 may implement the example monitoring data store 218 and/or the example configuration data store 235. Additionally or alternatively, the non-volatile memory 1020 may implement the example configuration data store 235.
As an alternative to implementing the methods and/or apparatus described herein in a system such as the device of Fig. 10, the methods and/or apparatus described herein may be embedded in a structure such as a processor and/or an ASIC (application specific integrated circuit).
Finally, although certain example methods, apparatus and articles of manufacture have been described herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the appended claims either literally or under the doctrine of equivalents.

Claims (18)

1. A method of monitoring a media device that displays a media presentation comprising a plurality of media windows, the method comprising the steps of:
determining a first value corresponding to at least one of a luminance or a chrominance of a first region in a surveillance image corresponding to the media presentation displayed by the media device, a shape of the first region representing at least one media window of the plurality of media windows;
comparing the first value to a first threshold to determine whether a first media window associated with a location of the first region in the monitored image is highlighted; and
when the first value is greater than or equal to the first threshold, processing at least one of an audio signal or a video signal provided by the media device to identify first media presented in the first media window that is different from second media presented in a second media window included in the media presentation.
2. The method of claim 1, further comprising at least one of:
capturing the surveillance image from a camera positioned to view the media presentation displayed by the media device; or
Processing the video signal output by the media device to obtain the surveillance image.
3. The method of claim 1, further comprising:
the first region is selected to correspond to one of a plurality of regions of interest, each region of interest corresponding to a respective media window of the plurality of media windows.
4. The method of claim 1, further comprising:
moving a template representing at least one of the plurality of media windows over a plurality of horizontally moved positions and a plurality of vertically moved positions of the monitoring image; and
the first region is selected to correspond to a region of the monitoring image defined by the template positioned at a first horizontally-shifted position and a first vertically-shifted position of the template.
5. The method of claim 4, further comprising:
performing a binary conversion of an input image to determine the monitoring image, the binary conversion performed using a second threshold to convert at least one of grayscale pixels or color pixels of the input image to corresponding binary pixels of the monitoring image; and
correlating the first region with the template to determine the first value.
6. The method of claim 5, wherein the second threshold corresponds to at least one of a pixel luminance value or a pixel chrominance value indicating that one of the plurality of media windows is highlighted.
7. The method of claim 6, wherein the first threshold corresponds to a minimum number of pixels in the first region for which at least one of a pixel luminance value or a pixel chrominance value is greater than or equal to the second threshold required to determine the first region as corresponding to the highlighted media window.
8. The method of claim 4, further comprising: when the first value is greater than or equal to the first threshold, indicating that the first media window is highlighted and positioned at the first horizontally moved position and the first vertically moved position.
9. The method of claim 1, wherein the first threshold corresponds to at least one of a luminance or a chrominance indicating that at least one of the plurality of media windows is highlighted.
10. The method of claim 1, further comprising: when the first value is greater than or equal to the first threshold, indicating that the first media window associated with the location of the first region is highlighted.
11. The method of claim 1, further comprising: when the first value is greater than or equal to the first threshold, indicating that the media device is tuned to a first channel of a plurality of tunable channels, the first channel configured to provide media capable of being presented using the plurality of media windows.
12. The method of claim 1, further comprising: when the first value is greater than or equal to the first threshold, performing optical character recognition in the first region of the monitoring image to identify a first media presented in the first media window that is different from a second media presented in a second media window included in the media presentation.
13. The method of claim 1, wherein the plurality of media windows are mapped to a respective plurality of media sources, the method further comprising: identifying a first media presented in the first media window as corresponding to a first media source mapped to the first media window when the first value is greater than or equal to the first threshold, the first media being different from a second media presented in a second media window included in the media presentation.
14. The method of claim 1, further comprising, when the first value is less than the first threshold:
determining a second value representing at least one of a luminance or a chrominance of a second region in the surveillance image, a shape of the second region representing at least one of the plurality of media windows; and
comparing the second value to the first threshold to determine whether a second media window associated with the location of the second region in the monitored image is highlighted.
15. A media device monitoring unit, the media device monitoring unit comprising:
a video interface in communication with at least one of a camera or a media device video output to obtain a surveillance image corresponding to a media presentation displayed by the media device, the media presentation comprising a plurality of media windows;
a highlight window detector in communication with the video interface, the highlight window detector to:
determining a first value corresponding to at least one of a luminance or a chrominance of a first region in the monitoring image, a shape of the first region representing at least one media window of the plurality of media windows; and
comparing the first value to a first threshold to determine whether a first media window associated with a location of the first region in the monitored image is highlighted;
a configuration interface to specify at least one template or a plurality of regions of interest, the at least one template corresponding to a shape of at least one of the plurality of media windows, the plurality of regions of interest corresponding to the plurality of media windows, respectively, the highlight window detector to determine the first region in the surveillance image using the at least one template or the plurality of regions of interest; and
a media identifier in communication with the highlight window detector, the media identifier to process at least one of an audio signal or a video signal provided by the media device to identify a first media presented in the first media window when the highlight window detector detects that the first media window is highlighted, the first media being different from a second media presented in a second media window included in the media presentation.
16. The media device monitoring unit of claim 15, wherein the highlight window detector comprises:
a binary image converter to convert the monitoring image to a binary monitoring image using a second threshold value to convert at least one of grayscale or color pixels of the monitoring image to respective binary pixels of the binary monitoring image, the second threshold value corresponding to a pixel brightness value indicating that one of the plurality of media windows is highlighted; and
a template correlator to:
moving the template specified by the configuration interface at a plurality of horizontal movement positions and a plurality of vertical movement positions of the binary monitoring image;
selecting a first region to correspond to a region of a binary monitoring image defined by the template and located at a first horizontally-shifted position and a first vertically-shifted position of the template; and
correlating the first region with the template to determine the first value; and
a determination unit to indicate that the first media window is highlighted and positioned at the first horizontally moved position and the first vertically moved position when the first value is greater than or equal to the first threshold.
17. The media device monitoring unit of claim 15, wherein the highlight window detector comprises:
a region selector that selects the first region in the monitoring image to correspond to a first region of interest of the plurality of regions of interest specified by the configuration interface; and
a brightness comparator to:
determining the first value representing the brightness of the first region in the surveillance image; and
comparing the first value to the first threshold, the first threshold corresponding to a brightness indicating that at least one of the plurality of media windows is highlighted; and
a determination unit to determine whether the first media window associated with the first region of interest is highlighted based on a comparison of the first value and the first threshold.
18. The media device monitoring unit of claim 15, further comprising a channel detector in communication with the highlight window detector, the channel detector to indicate that the media device is tuned to a first channel of a plurality of tunable channels when the highlight window detector detects that the first media window is highlighted, the first channel configured to provide media capable of being presented using the plurality of media windows.
HK13108125.6A 2009-05-29 2013-07-11 Methods and apparatus to monitor a multimedia presentation including multiple content windows HK1180864A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/474,906 2009-05-29

Publications (1)

Publication Number Publication Date
HK1180864A true HK1180864A (en) 2013-10-25
