WO1997003517A1 - Methods and apparatus for producing composite video images - Google Patents
Methods and apparatus for producing composite video images
- Publication number
- WO1997003517A1 WO1997003517A1 PCT/GB1996/001682 GB9601682W WO9703517A1 WO 1997003517 A1 WO1997003517 A1 WO 1997003517A1 GB 9601682 W GB9601682 W GB 9601682W WO 9703517 A1 WO9703517 A1 WO 9703517A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- images
- expected
- colour
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
- H04N5/2723—Insertion of virtual advertisement; Replacing advertisements physically present in the scene by virtual advertisement
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
- H04N5/2628—Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
Definitions
- the present invention relates to a system for automatically generating and adding secondary images to primary images of real world scenes in such a way that the secondary image appears to be physically present in the scene represented by the primary image when the composite image is viewed subsequently.
- the invention may be applied to the presentation of advertising material (secondary images) within primary images including, but not limited to, television broadcasts, video recordings, cable television programmes and films. It is applicable to all video/TV formats, including analogue and digital video, PAL, NTSC, SECAM and HDTV.
- This type of advertising is particularly applicable to, but is not limited to, live broadcasts of sports events, programmes of highlights of sports events, videos of sports events, live broadcasts of important state events, television broadcasts of "pop" concerts etc.
- Prior practice relating to the placement of advertisements within scenes represented in TV/video images includes: physical advertising hoardings which can be placed at appropriate places in a scene or venue such that they sometimes appear in the images; such hoardings can be either simple printed signs or electromechanical devices allowing the display of several fixed advertisements consecutively; advertisements which are placed directly onto surfaces within the scene, for example, by being painted onto the outfield at a cricket match, or by being placed on players' clothes or by being painted onto racing car bodies; small fixed advertisements, for example, company logos, which are simply superimposed on the image of the scene.
- each physical advertising hoarding can present, at most, a few static images; it cannot be substantially varied during the event, nor can its image be changed after the event other than by a painstaking manual process of editing individual images; advertisements made, for example, on playing surfaces or on participants' clothing, have to be relatively discreet, otherwise they intrude too much into the event itself; fixed advertisements, such as company logos, superimposed on the image, look artificial and intrusive since they are obviously not part of the scene being viewed.
- the present invention concerns a system whereby secondary images, such as advertising material, can be combined electronically with, for example, a live action video sequence in such a manner that the secondary image appears in the final composite image as a natural part of the original scene.
- the secondary image may appear to be located on a hoarding, while the hoarding in the original scene contains different material or is blank. This allows, for example, different advertising material to be incorporated into the scene to suit different broadcast audiences.
- colour keying, also known as "chroma keying", is a known technique in which a first video signal shows a foreground object, such as a weather forecaster, shot against a backdrop of a single "key" colour, while a second video source provides another signal, such as a weather map; the two video signals are mixed together so that the second video signal replaces all parts of the first video signal which have the key colour.
- a related technique is pattern-keying, in which the region to be replaced is located by recognising a stored pattern rather than a key colour; alternatively, of course, individual frames of the primary image could be edited manually to include the secondary image.
- it has previously been proposed to use video systems of this general type to insert advertising material into video images, one example being disclosed in WO93/02524.
- WO93/06691 discloses a system having similar capabilities.
- Colour keying works well in very restricted circumstances where the constituent images can be closely controlled, such as in weather forecasting or pre-recorded studio productions. However, it does not work in the general case where it is desired to mix unrestricted background images in parts of unrestricted primary images. The same applies generally to pattern-keying systems. Replacing physical advertising signs by manually editing series of images is not feasible for live broadcasts and is extremely costly even for use with recorded programmes.
- apparatus for generating a composite video image comprising a combination of a first video image of a real world scene and a second video image, such that said second image appears to be superimposed on the surface of an object appearing within said first image, including: at least one camera for generating said first image; means for generating said second image by transforming a preliminary second image to match the size, shape and orientation of said surface as seen in said first image; and means for combining said second image with said first image to produce a composite final image; said apparatus including: means for storing a three-dimensional computer model of the environment containing the real world scene, said model including at least one target space within said environment upon which said second image is to be superimposed; means for generating camera data defining at least the location, orientation and focal length of a camera generating said first image; and means for transforming the preliminary second image on the basis of said model and said camera data so as to match said target space as seen in the first image, prior to combining said first image and said second image.
- One or more cameras 10 are deployed to provide video coverage of an event in a venue, such as a sporting arena (not shown).
- the following discussion relates particularly to "live” coverage, but it will be understood that the invention is equally applicable to processing pre-recorded video images and associated data.
- Each of the cameras 10 is augmented by the addition of a hardware module (not shown) adapted to generate signals containing additional data about the camera, including position and viewing direction in three dimensions, and lens focal length.
- a wide variety of known devices may be used for providing data about the orientation of a camera (e.g. inclinometers, accelerometers, rotary encoders etc.), as will be readily apparent to those of ordinary skill in the art.
- the video signal from each camera 10 in operation at a particular event is passed to an editing desk 12 as normal, where the signal to be transmitted is selected from among the signals from the various cameras.
- the additional camera data is passed to a modelling module (computer) 14 which has access to a predefined, digital 3-d model of the venue 16.
- the venue model 16 contains representations of all aspects of the venue which are significant for operation of the system, typically including the camera positions and the locations, shapes and sizes of prominent venue features and all "target spaces" onto which secondary images are to be superimposed by the system, such as physical advertising hoardings.
- the modelling module 14 uses the camera location, orientation and focal length data to compute an approximation of the image expected from the camera 10 based on transformed versions of items forming part of the model 16 which are visible in the camera's current view.
- the modelling module 14 also calculates a pose vector relative to the camera view vector for each of the target spaces visible in the image.
- Target spaces into which the system is required to insert secondary images are referred to herein as "designated targets”.
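As a rough illustration of the computation performed by the modelling module 14, the sketch below projects venue-model points into an expected image using a simple pinhole camera model. The function names, the look-at parameterisation of the camera orientation and all numeric values are assumptions made for illustration only; the patent does not prescribe a particular projection model.

```python
import numpy as np

def camera_rotation(view_dir, up=np.array([0.0, 0.0, 1.0])):
    """World-to-camera rotation whose rows are the camera axes (z along the view)."""
    z = np.asarray(view_dir, float)
    z = z / np.linalg.norm(z)
    x = np.cross(z, up)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])

def project_points(points_world, cam_pos, R, focal_px, principal_point):
    """Project 3-d venue-model points into the expected image (pinhole model)."""
    p = (np.asarray(points_world, float) - cam_pos) @ R.T   # world -> camera frame
    u = focal_px * p[:, 0] / p[:, 2] + principal_point[0]
    v = focal_px * p[:, 1] / p[:, 2] + principal_point[1]
    return np.stack([u, v], axis=1)

# Corners of a rectangular advertising hoarding in venue-model coordinates (metres).
hoarding = np.array([[10.0, 40.0, 0.0], [16.0, 40.0, 0.0],
                     [16.0, 40.0, 1.0], [10.0, 40.0, 1.0]])
cam_pos = np.array([0.0, -30.0, 5.0])
R = camera_rotation(view_dir=hoarding.mean(axis=0) - cam_pos)
expected_corners = project_points(hoarding, cam_pos, R,
                                  focal_px=1500.0, principal_point=(360.0, 288.0))
# The camera-frame coordinates of the target also give its pose relative to the camera.
```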
- the additional camera data is also passed to the secondary image generation module 18 which generates a preliminary secondary image for each designated target in the primary image.
- a library of secondary images is suitably stored in a secondary image database 20, accessible by the secondary image generation module 18.
- the pose of each of the designated targets is fed into a transformation module 22 together with the preliminary secondary images.
- the preliminary secondary images are transformed by the transformation module 22 so that they have the correct perspective appearance (size, shape and orientation) to match the corresponding target space as viewed by the camera 10.
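One common way to realise such a perspective transformation is a planar homography fitted to the four projected corners of the target space. The sketch below computes it with the direct linear transform; the corner values are illustrative and not taken from the patent. Applying H to a point (x, y, 1) of the preliminary secondary image and dividing by the third coordinate gives its position in the camera image.

```python
import numpy as np

def homography_from_corners(src, dst):
    """Direct linear transform: homography mapping the four corners of the
    preliminary secondary image (src) onto the corresponding corners of the
    target space as seen in the camera image (dst)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Corners of a 400x100 pixel advertisement and illustrative image positions of the target.
src = [(0, 0), (400, 0), (400, 100), (0, 100)]
dst = [(212.4, 310.7), (388.9, 305.2), (391.6, 352.8), (210.1, 360.3)]
H = homography_from_corners(src, dst)
```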
- the original video image and the expected image calculated from the 3-d model 16 are both also passed to a matching module 24.
- the matching module 24 effectively superimposes the calculated expected image over the actual image as a basis for matching the two. It identifies as many as possible of the corners and edges of the target spaces corresponding to the designated targets and any other items of the venue model 16 present in the expected image. It uses these matches to refine the transformational match of the expected image to the actual image. Finally, the matcher extracts any foreground objects and lighting effects from the image areas of the designated targets.
- the original primary image from the editing desk 12, the transformed secondary image and the output data from the matching module 24 are passed to one or more output modules 26 where they are combined to produce a final composite video output, in which the primary and secondary images are combined.
- Each camera is equipped with a device which continuously transmits additional camera data to the central station.
- This camera data could either be transmitted via a separate means such as additional cables or radio links, or could be incorporated into the hidden parts of the video signal in the same way as teletext information. Methods and means for transmitting such data are well known.
- This camera data typically includes some or all of: a camera identifier; the camera position; the camera orientation; the lens focal length; the lens focusing distance; the camera aperture.
- the camera identifier is a string of characters which uniquely identifies each camera in use.
- the camera position is a set of three coordinate values giving the position of the camera in the coordinate system in use in the 3-d venue model.
- the camera orientation is another set of three values, defining the direction in which the camera is pointing. For example, this could be made up of three angles defining the camera viewing direction in the coordinate system used to define the camera position.
- the coordinate system used is not critical as long as all the cameras in use at a particular event supply the camera data in a way which is understood by the modelling and transformation modules.
- the lens focal length is required to define the scene for the purposes of secondary image transformation.
- the lens focusing distance and camera aperture are also required to define the scene for the purposes of transforming the secondary image in terms of which parts of the scene are in focus.
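By way of illustration only, the per-frame camera data could be carried in a record along the following lines; the field names and units are assumptions, since the patent does not fix any particular encoding or transmission format.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CameraData:
    """Auxiliary data transmitted alongside each camera's video signal."""
    camera_id: str                            # unique identifier for the camera in use
    position: Tuple[float, float, float]      # camera position in venue-model coordinates
    orientation: Tuple[float, float, float]   # e.g. three angles in the same coordinate system
    focal_length_mm: float                    # lens focal length
    focus_distance_m: float                   # lens focusing distance
    aperture_f: float                         # camera aperture (f-number)

frame_data = CameraData("CAM-3", (0.0, -30.0, 5.0), (0.1, -0.05, 0.0),
                        focal_length_mm=35.0, focus_distance_m=60.0, aperture_f=4.0)
```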
- the additional devices with which each camera is equipped may depend on the role of the camera.
- a particular camera may be fixed in position but adjustable in orientation.
- a calibration procedure may be used which results in an operator entering the camera's position into the device before the event starts.
- the orientation would be determined continuously by the device as would the focal length, focusing distance and aperture.
- the model may be based on a normal orthogonal 3-d coordinate system.
- the coordinate system origin used at a particular venue may be global or local in nature. For example, if the venue is a soccer stadium, it may be convenient to take the centre spot as the origin and to take the half-way line to define one axis direction, with an imaginary line running down the centre of the pitch defining a second axis direction. The third axis would then be a vertical line through the centre spot.
- Each relevant permanent item of the venue is represented within the model in a way which encapsulates the item's important features for the purposes of the present system.
- for each physical advertisement, it is preferable that the surface properties are stored using a scale-invariant representation in order to simplify the matching process.
- it is useful to the matching process if prominent permanent venue features are included in the venue model; these may be stored as solid objects with surface properties (for example, if a grandstand contains a series of vertical pillars, then these could be used in the matching process to improve its accuracy).
- the object of the signal processing performed by the system is to identify the position of the designated targets in the current image, to extract any foreground objects and lighting effects relevant to the designated targets, then to generate secondary images and insert them into the current primary image in place of the designated targets such that they look completely natural.
- the signal processing takes place in the following stages:
  1. Use the camera data in conjunction with the venue model to generate an expected image incorporating all the objects in the venue model which are expected to be seen in the actual image, and to calculate the pose of each of the visible designated targets relative to the camera (modelling module 14).
  2. Identify as many as possible of the expected objects in the actual image (matching module 24).
  3. Use the individual item matches to refine the view details of the expected image (matching module 24).
  4. Project the borders of the designated targets onto the real image and refine the border positions, where appropriate with reference to edges and corners in the actual image (matching module 24).
  5. Match the expected designated target image to the corresponding region in the actual image, the match to be performed separately in colour space and intensity space. Any missing regions in the colour space match are assumed to be foreground objects; the bounding subregion of the target region is extracted and stored, including colour and intensity information. Any mismatch regions occurring in intensity space only, e.g. shadows, which are not part of foreground objects are extracted and stored as intensity variations (matching module 24).
  6. Store the outcome of the matching process for use in matching the next frame.
  7. Transform the scale-invariant designated target model to fit the best estimate bounding region (transform module 22).
  8. Reassemble as many outgoing video signals as required by inserting the transformed secondary images into the original primary image and then reinserting foreground objects and lighting effects (output module 26).
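Read end to end, the eight stages amount to a per-frame pipeline along the following lines. This is a structural sketch only: every callable passed in stands for the corresponding module, and none of the names or signatures below are defined by the patent.

```python
def process_frame(frame, camera_data, venue_model, prev_state,
                  render_expected, match_images, extract_effects,
                  select_ad, warp_ad, composite):
    """One frame through the eight stages, with the modules injected as callables."""
    # Stage 1: expected image and per-target poses from camera data + 3-d venue model.
    expected, poses = render_expected(venue_model, camera_data)

    # Stages 2-4: match expected objects to the actual frame and refine the borders
    # of each designated target; 'regions' maps target id -> refined image boundary.
    regions, state = match_images(expected, frame, prev_state)

    # Stage 5: per-target colour-space and intensity-space comparison, yielding
    # foreground masks and lighting (shadow/highlight) maps.
    foreground, lighting = extract_effects(frame, regions, venue_model)

    # Stage 7: fit each chosen secondary image to its target's pose and refined region.
    ads = {t: warp_ad(select_ad(t), poses[t], region) for t, region in regions.items()}

    # Stage 8: insert the ads, then reinstate foreground objects and lighting effects.
    out = composite(frame, ads, foreground, lighting)

    # Stage 6: the matching state is returned to seed matching of the next frame.
    return out, state
```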
- the matching module 24 has several related functions.
- the matcher first compares the expected view with the actual image to match corners and edges of items in the expected view with corresponding corners and edges in the actual image. This is greatly simplified by the fact that the expected image should be very close to the same view of the scene as the actual image.
- the object of this phase of matching is to correlate regions of the actual image with designated targets in the expected image. Corners are particularly beneficial in this part of the process since a corner match provides two constraints on the overall transformation whilst an edge match provides only one. Since the colour of the objects in the expected image is known from their representation in the venue model, this provides a further important clue in the matching process.
- the outcome of the first phase of matching is a detailed mapping of the expected image onto the actual image.
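As a simple illustration of how corner matches refine this mapping, the sketch below fits an affine correction by least squares, each corner contributing two equations (one per coordinate). The affine form and the numeric values are assumptions for illustration; edge matches, which contribute one equation each, could be appended to the same system.

```python
import numpy as np

def refine_affine(expected_pts, actual_pts):
    """Least-squares affine correction mapping expected-image corner positions onto
    their matched positions in the actual image. Six unknowns, two equations per
    corner, so three corner matches are the minimum."""
    A, b = [], []
    for (x, y), (u, v) in zip(expected_pts, actual_pts):
        A.append([x, y, 1, 0, 0, 0]); b.append(u)
        A.append([0, 0, 0, x, y, 1]); b.append(v)
    params, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    a, b2, c, d, e, f = params
    return np.array([[a, b2, c], [d, e, f], [0, 0, 1]])

# Illustrative corner matches (expected position -> detected position in the actual image).
expected = [(120.0, 80.0), (300.0, 78.0), (298.0, 150.0), (122.0, 152.0)]
detected = [(123.5, 83.0), (304.1, 80.6), (302.0, 153.2), (125.2, 155.5)]
T = refine_affine(expected, detected)
```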
- the second stage of matching is to deal with each designated target in turn to identify its exact boundary in the image and any foreground objects or lighting effects affecting the appearance of the corresponding physical object or area in the original image. This is done by using the corner and edge matches and interpolating any missing sections of the boundary of the original object/area using the projected boundary of the designated target. For example, if the designated target is a rectangular advertising hoarding, then as long as sufficient segments of the boundary of the hoarding are identified, the position of the remaining segments can be calculated using the known segments and the known shape and size of the hoarding together with the known transformation into the image.
- the final stage of the matching process involves identifying foreground objects and lighting effects within the region of each designated target. This is based on transforming the scale invariant representation of the designated target in the venue model such that it fits exactly the bounding region of the corresponding ad in the original image.
- a match in colour space is then carried out within the bounding region to identify sections of the image which do not match the corresponding sections of the transformed model. These non-matching sections are taken to be foreground objects and these parts of the image are extracted and stored to be superimposed on top of the transformed secondary image in the final composite image.
- a match in intensity space is also carried out to identify intensity variations which are not part of the original object/area. These are considered to be lighting effects and an intensity transformation is used to extract these and keep them for later use in transforming the secondary image.
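A minimal sketch of this two-space comparison is given below, assuming the expected and actual target regions are available as RGB arrays of equal size. The chromaticity/intensity split and the thresholds are illustrative choices, not values specified by the patent.

```python
import numpy as np

def split_mismatches(actual_rgb, expected_rgb, colour_thresh=0.15, intensity_thresh=0.10):
    """Separate colour-space mismatches (taken to be foreground objects) from
    intensity-only mismatches (taken to be lighting effects such as shadows)
    within the bounding region of a designated target."""
    actual = actual_rgb.astype(float) / 255.0
    expected = expected_rgb.astype(float) / 255.0

    intensity_a = actual.mean(axis=2)
    intensity_e = expected.mean(axis=2)
    # Chromaticity: colour normalised by intensity, so shading largely cancels out.
    chroma_a = actual / (intensity_a[..., None] + 1e-6)
    chroma_e = expected / (intensity_e[..., None] + 1e-6)

    colour_diff = np.linalg.norm(chroma_a - chroma_e, axis=2)
    foreground_mask = colour_diff > colour_thresh            # occluding objects

    intensity_ratio = (intensity_a + 1e-6) / (intensity_e + 1e-6)
    lighting_mask = (np.abs(intensity_ratio - 1.0) > intensity_thresh) & ~foreground_mask

    # The ratio map is kept so the same shading can later be applied to the secondary image.
    return foreground_mask, lighting_mask, intensity_ratio
```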
- the output from the matching process includes: the exact image boundary of all the designated targets; foreground objects in any of these regions; and lighting effects in any of these regions.
- the secondary image database 20 may include information such as: the percentage of the available advertising space-time which has been booked by each advertiser; any preferences on which part of the event's duration and which part of the venue are to be used for each advertiser; associations of particular secondary images with potential occurrences in the event being covered.
- Generating a particular advertisement for display in the present system may take place in the following stages: choose the company whose advertisement will be displayed; choose which of the selected company's advertisements is appropriate for the current context; transform the stored representation of the selected advertisement to match the available region of the image.
- for the selection of the advertiser, the destination of the video signal concerned is first determined. This indexes the advertisers for the output module 26 corresponding to that destination. Next, a check is made to see how much advertising time each advertiser has had during the event so far relative to how much they have booked. The advertiser is selected on this basis, taking account of advertiser preferences such as location and timing.
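A toy sketch of this booking-balance selection follows; the data layout and numbers are invented for illustration, and location and timing preferences would be applied as a further filter.

```python
def select_advertiser(advertisers):
    """Pick the advertiser whose share of advertising time delivered so far lags
    furthest behind the share they have booked."""
    return max(advertisers,
               key=lambda name: advertisers[name]["booked"] - advertisers[name]["shown"])

advertisers = {
    "DrinksCo": {"booked": 0.40, "shown": 0.28},   # fractions of total advertising space-time
    "CarCo":    {"booked": 0.35, "shown": 0.42},
    "BankCo":   {"booked": 0.25, "shown": 0.30},
}
print(select_advertiser(advertisers))   # -> "DrinksCo", the most under-served booking
```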
- the next stage, the selection of one advertisement from a set supplied by the advertiser to replace a designated target in the original image, is based on factors including: the size of the space available; the location of the designated target; the phase of the event; any notable occurrences during the event.
- an advertiser may choose to supply some advertisements containing a lot of detail and some which are very simple. If the space available is large, perhaps because the camera concerned is showing a close up of a soccer player about to take a corner and the advertising space available fills a large part of the image, then it may be appropriate to fit a more detailed advertisement where the details will be visible. At the other extreme, if a particular camera is showing a long view, then it may be better to select a very simple advertisement with strong graphics so that the advertisement is legible on the screen.
- the selection of advertisements can be influenced by what has happened in the event. For example, say a particular player, X, has just scored a goal. Then an advertiser, Y, who manufactures drinks may want to display something to the effect that "X drinks Y". To meet this need the system has the capability to store advertisements which are only active (i.e. available for selection) when a particular event has taken place. Additionally, these advertisements can have place holders where the name of a participant or some other details can be entered when the ad is made active. This could be useful if drinks advertiser Y has a contract with a whole team. Then when any team member does something exceptional, that team member's name, or other designation, could be inserted into the advertisement.
- there is no restriction on advertisements being static. As long as the advertisement still looks as though it is part of the event, it can be completely dynamic. For example, an advertising video could be inserted into a suitable designated target.
- one particular case might be where the venue concerned has a large playback screen, such as at many cricket and athletics events. The screen would be used to show replays of the event to the spectators present, but it could also be a designated target for the present system. Such a screen would then be a good candidate for showing video advertising material.
- a further aspect of the process of secondary image generation relates to how to change images.
- if a camera is panning, then different secondary images can be included as different parts of the venue come into the image.
- one camera may be used for a particularly long time, and it may be desirable to change the secondary images in the composite image part way through the shot. This is accomplished by simulating the change of a physical ad.
- the secondary image generation process may simulate the operation of a physical hoarding, for example, by appearing to rotate segments of a hoarding to switch from one ad to the next.
- the pose of the physical advertising space relative to the camera concerned is known from the additional camera data and the 3-d venue model 16.
- transforming the scale-invariant representation of the chosen secondary image into a 2-d image region with the correct perspective appearance is a straightforward task.
- the secondary image has to fit the target space exactly.
- the region bounding the space is supplied by the matching process.
- transforming the ad involves: using the additional camera data and 3-d venue model 16 to calculate the perspective appearance of the secondary image (this is done in the modelling module 14); using the matching information to scale the secondary image to fit the space available.
- the secondary image is now ready to be dropped into the original video image.
- One output module 26 is required for each outgoing video signal. Hence, if the final of the World Cup is being transmitted to 100 countries which have been split into 10 areas for advertising, then ten output modules would be required.
- the output module 26 takes one set of secondary images and inserts them into the original primary image. It then takes the foreground object and lighting effects generated by the matching process and reintegrates them. In the case of the foreground objects, this requires parts of the inserted secondary images to be overwritten with the foreground objects. In the case of lighting effects, such as shadows, the image segments containing the secondary image must be modified such that the secondary image looks as if it is subjected to the same lighting effects as the corresponding part of the original scene. This is done by separating out the colour and intensity information and modifying them appropriately. Methods for doing this are well known in the field of computer graphics. Use of the present invention has many benefits for advertisers, particularly at large international events.
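Bringing the transform and matching outputs together, a deliberately simple (per-pixel, unoptimised) compositing sketch for one designated target might look like this. The homography H, the foreground mask and the intensity-ratio map are assumed to come from the earlier stages; none of the names are defined by the patent.

```python
import numpy as np

def composite_target(frame, ad, H, foreground_mask, intensity_ratio):
    """Insert one transformed secondary image into the frame, then reinstate
    foreground objects and lighting effects. H maps ad pixel coordinates to
    frame coordinates; sampling is nearest-neighbour for brevity."""
    out = frame.astype(float).copy()
    Hinv = np.linalg.inv(H)
    h, w = frame.shape[:2]
    ah, aw = ad.shape[:2]
    for v in range(h):
        for u in range(w):
            x, y, s = Hinv @ np.array([u, v, 1.0])
            x, y = x / s, y / s
            if 0 <= x < aw and 0 <= y < ah:          # pixel lies inside the target space
                if foreground_mask[v, u]:
                    continue                         # keep the occluding foreground object
                colour = ad[int(y), int(x)].astype(float)
                out[v, u] = colour * intensity_ratio[v, u]   # re-apply shadows/highlights
    return np.clip(out, 0, 255).astype(np.uint8)
```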
- Another area of prior art involves a human operator manually selecting the areas to be replaced and performing various functions to deal with foreground objects and lighting effects. This method is very time consuming and expensive and obviously not applicable to live broadcasts.
- Another area of prior art specifies automatic replacement of an advertising logo using the pose of the identified logo to transform the virtual ad (WO93/06691) .
- this method does not describe any way of dealing with foreground objects or lighting effects.
- the main advantages of the present invention over the prior art are considered to be: augmentation of cameras and the use of a full 3-d venue model to enable generation of an expected image and reliable and fast matching of the expected image to an actual image without relying on colour keying or extensive searching or analysis of the actual image; use of the full 3-d venue model together with the additional camera data to eliminate the need to estimate the pose of physical ads from the image data; separation of the video signal into colour and intensity images for separate treatment of foreground objects and lighting effects; use of corner and edge detection and matching as the basis for superimposing expected image segments over actual image segments; use of stored scale-invariant representations of the physical designated targets to greatly simplify identification of foreground objects and lighting effects.
- the present invention is much more generally applicable than those based on the prior art.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Business, Economics & Management (AREA)
- Marketing (AREA)
- Studio Circuits (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU64655/96A AU6465596A (en) | 1995-07-13 | 1996-07-15 | Methods and apparatus for producing composite video images |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB9514313.7 | 1995-07-13 | ||
| GBGB9514313.7A GB9514313D0 (en) | 1995-07-13 | 1995-07-13 | Live-ads |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO1997003517A1 true WO1997003517A1 (fr) | 1997-01-30 |
Family
ID=10777578
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/GB1996/001682 Ceased WO1997003517A1 (fr) | 1995-07-13 | 1996-07-15 | Methods and apparatus for producing composite video images |
Country Status (3)
| Country | Link |
|---|---|
| AU (1) | AU6465596A (fr) |
| GB (1) | GB9514313D0 (fr) |
| WO (1) | WO1997003517A1 (fr) |
-
1995
- 1995-07-13 GB GBGB9514313.7A patent/GB9514313D0/en active Pending
-
1996
- 1996-07-15 WO PCT/GB1996/001682 patent/WO1997003517A1/fr not_active Ceased
- 1996-07-15 AU AU64655/96A patent/AU6465596A/en not_active Abandoned
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO1993006691A1 (fr) * | 1991-09-18 | 1993-04-01 | David Sarnoff Research Center, Inc. | Video image merging using pattern-key insertion |
| WO1995010919A1 (fr) * | 1993-02-14 | 1995-04-20 | Orad, Inc. | Method and apparatus for detecting, identifying and incorporating advertising in a video |
| US5491517A (en) * | 1994-03-14 | 1996-02-13 | Scitex America Corporation | System for implanting an image into a video stream |
Cited By (56)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2319686B (en) * | 1996-11-19 | 2000-10-25 | Sony Corp | Combining a second image with a first image from a recording/reproducing device using the same camera control information |
| GB2319686A (en) * | 1996-11-19 | 1998-05-27 | Sony Corp | Combining image data obtained using the same camera control information from a video camera and from a video recorder |
| US6476874B1 (en) | 1996-11-19 | 2002-11-05 | Sony Corporation | Apparatus and method for combining background images with main images |
| GB2330265A (en) * | 1997-10-10 | 1999-04-14 | Harlequin Group Limited The | Image compositing using camera data |
| US6597406B2 (en) | 1998-09-04 | 2003-07-22 | Sportvision, Inc. | System for enhancing a video presentation of a live event |
| US6266100B1 (en) * | 1998-09-04 | 2001-07-24 | Sportvision, Inc. | System for enhancing a video presentation of a live event |
| EP1110382A4 (fr) * | 1998-09-04 | 2002-02-20 | Sportvision Inc | Procede et systeme permettant d'ameliorer la presentation video d'un evenement en direct |
| WO2000014959A1 (fr) * | 1998-09-04 | 2000-03-16 | Sportvision Systems, Llc | Procede et systeme permettant d'ameliorer la presentation video d'un evenement en direct |
| EP1032148A3 (fr) * | 1999-02-15 | 2002-07-24 | Advent Television Ltd. | Système électronique pour introduire un message publicitaire et pour l'émission de ce message |
| EP1981268A2 (fr) | 1999-11-08 | 2008-10-15 | Vistas Unlimited Inc. | Procédé et appareil pour insertion en temps réel d'images dans une vidéo |
| US7230653B1 (en) | 1999-11-08 | 2007-06-12 | Vistas Unlimited | Method and apparatus for real time insertion of images into video |
| EP1981268A3 (fr) * | 1999-11-08 | 2009-11-04 | Vistas Unlimited Inc. | Procédé et appareil pour insertion en temps réel d'images dans une vidéo |
| AU780494B2 (en) * | 1999-11-08 | 2005-03-24 | Vistas Unlimited, Inc. | Method and apparatus for real time insertion of images into video |
| EP1250803B1 (fr) * | 1999-11-08 | 2008-07-16 | Vistas Unlimited Inc. | Procede et dispositif destines a une insertion en temps reel d'images dans une video |
| US6965397B1 (en) | 1999-11-22 | 2005-11-15 | Sportvision, Inc. | Measuring camera attitude |
| KR20000054304A (ko) * | 2000-06-01 | 2000-09-05 | 이성환 | 방송 중계 영상 화면에 광고를 삽입하는 시스템 및 그제어방법 |
| US7145569B2 (en) | 2001-03-21 | 2006-12-05 | Sony Computer Entertainment Inc. | Data processing method |
| EP1260939A3 (fr) * | 2001-03-21 | 2006-08-09 | Sony Computer Entertainment Inc. | Procédé de traitement de données |
| US7206434B2 (en) | 2001-07-10 | 2007-04-17 | Vistas Unlimited, Inc. | Method and system for measurement of the duration an area is included in an image stream |
| US9631978B2 (en) | 2001-08-30 | 2017-04-25 | Hfc Prestige International Holding Switzerland S.A.R.L | Method for a hair colour consultation |
| EP1527713B2 (fr) † | 2001-08-30 | 2013-07-31 | The Procter and Gamble Company | Conseil en couleur de cheveux |
| EP3043548A1 (fr) * | 2002-03-29 | 2016-07-13 | Canon Kabushiki Kaisha | Procédé et appareil de traitement d'informations |
| US7212687B2 (en) | 2002-03-29 | 2007-05-01 | Canon Kabushiki Kaisha | Method and apparatus for processing information |
| EP1349382A3 (fr) * | 2002-03-29 | 2006-04-05 | Canon Kabushiki Kaisha | Procédé et dispositif de traitment d'information d'images |
| EP1416727A1 (fr) * | 2002-10-29 | 2004-05-06 | Accenture Global Services GmbH | Publicités virtuelles mobiles |
| US7746377B2 (en) | 2003-11-28 | 2010-06-29 | Topcon Corporation | Three-dimensional image display apparatus and method |
| EP1536378A3 (fr) * | 2003-11-28 | 2006-11-22 | Topcon Corporation | Procédé et appareil d'affichage tridimensionnel des modèles générés à partir d'images stéréo |
| US20080052104A1 (en) * | 2005-07-01 | 2008-02-28 | Searete Llc | Group content substitution in media works |
| US20080010083A1 (en) * | 2005-07-01 | 2008-01-10 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Approval technique for media content alteration |
| US9583141B2 (en) | 2005-07-01 | 2017-02-28 | Invention Science Fund I, Llc | Implementing audio substitution options in media works |
| US7397932B2 (en) | 2005-07-14 | 2008-07-08 | Logitech Europe S.A. | Facial feature-localized and global real-time video morphing |
| US7209577B2 (en) | 2005-07-14 | 2007-04-24 | Logitech Europe S.A. | Facial feature-localized and global real-time video morphing |
| EP2523192A1 (fr) * | 2006-03-07 | 2012-11-14 | Sony Computer Entertainment America LLC | Remplacement dynamique d'accessoires de scène cinématiques dans un contenu de programme |
| US8549554B2 (en) | 2006-03-07 | 2013-10-01 | Sony Computer Entertainment America Llc | Dynamic replacement of cinematic stage props in program content |
| US8566865B2 (en) | 2006-03-07 | 2013-10-22 | Sony Computer Entertainment America Llc | Dynamic insertion of cinematic stage props in program content |
| EP1994742A4 (fr) * | 2006-03-07 | 2010-05-05 | Sony Comp Entertainment Us | Remplacement et insertion dynamiques d'accessoires de scène cinématiques dans un contenu de programme |
| US9038100B2 (en) | 2006-03-07 | 2015-05-19 | Sony Computer Entertainment America Llc | Dynamic insertion of cinematic stage props in program content |
| US8860803B2 (en) | 2006-03-07 | 2014-10-14 | Sony Computer Entertainment America Llc | Dynamic replacement of cinematic stage props in program content |
| EP1865455A1 (fr) * | 2006-06-07 | 2007-12-12 | Seac02 S.r.l. | Système de publicité numérique |
| US7920717B2 (en) | 2007-02-20 | 2011-04-05 | Microsoft Corporation | Pixel extraction and replacement |
| US8988609B2 (en) | 2007-03-22 | 2015-03-24 | Sony Computer Entertainment America Llc | Scheme for determining the locations and timing of advertisements and other insertions in media |
| US9872048B2 (en) | 2007-03-22 | 2018-01-16 | Sony Interactive Entertainment America Llc | Scheme for determining the locations and timing of advertisements and other insertions in media |
| US10715839B2 (en) | 2007-03-22 | 2020-07-14 | Sony Interactive Entertainment LLC | Scheme for determining the locations and timing of advertisements and other insertions in media |
| US9237258B2 (en) | 2007-03-22 | 2016-01-12 | Sony Computer Entertainment America Llc | Scheme for determining the locations and timing of advertisements and other insertions in media |
| US10531133B2 (en) | 2007-03-22 | 2020-01-07 | Sony Interactive Entertainment LLC | Scheme for determining the locations and timing of advertisements and other insertions in media |
| US9497491B2 (en) | 2007-03-22 | 2016-11-15 | Sony Interactive Entertainment America Llc | Scheme for determining the locations and timing of advertisements and other insertions in media |
| US9538049B2 (en) | 2007-03-22 | 2017-01-03 | Sony Interactive Entertainment America Llc | Scheme for determining the locations and timing of advertisements and other insertions in media |
| US10003831B2 (en) | 2007-03-22 | 2018-06-19 | Sony Interactvie Entertainment America LLC | Scheme for determining the locations and timing of advertisements and other insertions in media |
| US8451380B2 (en) | 2007-03-22 | 2013-05-28 | Sony Computer Entertainment America Llc | Scheme for determining the locations and timing of advertisements and other insertions in media |
| US8665373B2 (en) | 2007-03-22 | 2014-03-04 | Sony Computer Entertainment America Llc | Scheme for determining the locations and timing of advertisements and other insertions in media |
| US20110078096A1 (en) * | 2009-09-25 | 2011-03-31 | Bounds Barry B | Cut card advertising |
| US9769473B2 (en) * | 2012-06-08 | 2017-09-19 | Apple Inc. | Predictive video coder with low power reference picture transformation |
| US20130329799A1 (en) * | 2012-06-08 | 2013-12-12 | Apple Inc. | Predictive video coder with low power reference picture transformation |
| US9571785B2 (en) * | 2014-04-11 | 2017-02-14 | International Business Machines Corporation | System and method for fine-grained control of privacy from image and video recording devices |
| US20150296170A1 (en) * | 2014-04-11 | 2015-10-15 | International Business Machines Corporation | System and method for fine-grained control of privacy from image and video recording devices |
| CN112153451A (zh) * | 2020-09-01 | 2020-12-29 | 广州汽车集团股份有限公司 | 一种车辆使用说明的展示方法及智能终端 |
Also Published As
| Publication number | Publication date |
|---|---|
| GB9514313D0 (en) | 1995-09-13 |
| AU6465596A (en) | 1997-02-10 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO1997003517A1 (fr) | Methods and apparatus for producing composite video images | |
| EP0595808B1 (fr) | Television images with selected inserted indicia | |
| US5737031A (en) | System for producing a shadow of an object in a chroma key environment | |
| US7158666B2 (en) | Method and apparatus for including virtual ads in video presentations | |
| EP0683961B1 (fr) | Method and apparatus for detecting, identifying and incorporating advertising in a video | |
| US6335765B1 (en) | Virtual presentation system and method | |
| US5903317A (en) | Apparatus and method for detecting, identifying and incorporating advertisements in a video | |
| US8457350B2 (en) | System and method for data assisted chrom-keying | |
| US9160938B2 (en) | System and method for generating three dimensional presentations | |
| US8922718B2 (en) | Key generation through spatial detection of dynamic objects | |
| US20130278727A1 (en) | Method and system for creating three-dimensional viewable video from a single video stream | |
| US20040194123A1 (en) | Method for adapting digital cinema content to audience metrics | |
| EP1463331A1 (fr) | Système et procédé pour modifier le contenu des images de cinéma numérique | |
| KR20160048178A (ko) | 영상 프로덕션을 만들기 위한 방법 및 그 시스템 | |
| KR20030082889A (ko) | 텔레비젼 카메라를 이용한 가시적 대상물 촬영화면 수정방법 | |
| EP0993204B1 (fr) | Chroma keying studio system | |
| JP4668500B2 (ja) | Method and apparatus for inserting images into video in real time | |
| EP0850536A2 (fr) | Method and device for inserting images into a video sequence | |
| Hulth et al. | IBR camera system for live TV sport productions | |
| IL104725A (en) | System for exchanging sections of video background with virutal images | |
| HK1028857A (en) | Chroma keying studio system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A1 Designated state(s): AL AM AT AU AZ BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN AM AZ BY KG KZ MD RU TJ TM |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA |
|
| DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
| WWE | Wipo information: entry into national phase |
Ref document number: 1996924077 Country of ref document: EP |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| WWW | Wipo information: withdrawn in national office |
Ref document number: 1996924077 Country of ref document: EP |
|
| REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
| 122 | Ep: pct application non-entry in european phase | ||
| NENP | Non-entry into the national phase |
Ref country code: CA |