
WO2001043427A1 - Indicating positional information in a video sequence - Google Patents


Info

Publication number
WO2001043427A1
Authority
WO
WIPO (PCT)
Prior art keywords
positions
data
camera
calculated
video
Prior art date
Application number
PCT/NO2000/000420
Other languages
French (fr)
Inventor
Harald K. Moengen
Original Assignee
Spotzoom As
Priority date
Filing date
Publication date
Application filed by Spotzoom As filed Critical Spotzoom As
Priority to AU17439/01A priority Critical patent/AU1743901A/en
Publication of WO2001043427A1 publication Critical patent/WO2001043427A1/en

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00 Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A63B24/0021 Tracking a path or terminating locations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/87 Combinations of radar systems, e.g. primary radar and secondary radar
    • G01S13/878 Combination of several spaced transmitters or receivers of known location for determining the position of a transponder or a reflector
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/78 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
    • G01S3/782 Systems for determining direction or deviation from predetermined direction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00 Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A63B24/0021 Tracking a path or terminating locations
    • A63B2024/0025 Tracking the path or location of one or more users, e.g. players of a game
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B2220/00 Measuring of physical parameters relating to sporting activity
    • A63B2220/10 Positions
    • A63B2220/13 Relative positions
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/74 Systems using reradiation of radio waves, e.g. secondary radar systems; Analogous systems
    • G01S13/76 Systems using reradiation of radio waves, e.g. secondary radar systems; Analogous systems wherein pulse-type signals are transmitted
    • G01S13/78 Systems using reradiation of radio waves, e.g. secondary radar systems; Analogous systems wherein pulse-type signals are transmitted discriminating between different kinds of targets, e.g. IFF-radar, i.e. identification of friend or foe
    • G01S13/785 Distance Measuring Equipment [DME] systems

Definitions

  • the present invention relates to a method and a system in connection with television and video production for registering and storing the positions of one or more natural objects and subsequently employing these stored positions in order to generate synthetic or composite video sequences where such positions are represented alphanumerically or graphically.
  • EP-A-0 252 215 discloses a method for showing two subsequent events in display surfaces located beside each other.
  • the method is particularly intended for displaying sports events such as, e.g., skiing and skating competitions and jumping events.
  • the method states in particular that the events which are displayed simultaneously are synchronised in time or displayed in parallel in space.
  • US patent 5,264,933 discloses a method for altering video images by inserting pictures, text or the like in a manner which makes it seem as if the inserted picture is a part of the original picture independently of panning and zooming of the camera.
  • the method makes it possible to alter advertising panels in a sports arena in order to adapt them to suit a given public.
  • the present invention is based on the realisation that on the basis of knowledge of the position of an object which is to be incorporated in a video image, one is no longer dependent on the object in question being filmed with corresponding camera angles, panning and zooming. Instead, a synthetic object can be generated which is placed inside the video image concerned. The positioning of the object in the picture can be calculated by means of the position of the natural object which the synthetic object is to represent, in relation to the camera's position and in relation to camera angle and zoom.
  • video and video production in this context refer both to live broadcasts for television and other productions for television and video.
  • the position is determined of all the objects which will subsequently be represented in a video sequence. These objects may, for example, be competitors at a sports event, or other objects whose positions can be used for subsequent generation of a display in a video sequence.
  • the term natural object will be employed for such objects and the direct representation thereof in video sequences, while the term synthetic object will be employed for an object which is displayed in a video sequence and which represents in a suitable manner registered position data for the natural objects.
  • the transmitters may be active radio transmitters based on battery operation, or they may be active or passive transceivers.
  • transponders without batteries are employed, preferably implemented by means of acoustic surface wave technology (SAW technology).
  • the number of position detectors which are necessary for determining the position of the objects which are equipped with such transmitters or transponders is dependent on the nature of the area in which the objects are located. If the area is substantially flat and relatively limited, two detectors should suffice in order to achieve an unambiguous position determination. If, however, the area is three-dimensional, i.e. if there are three coordinates x,y,z which are to be found, or the extent of the area is relatively large in relation to the detectors' positions, it may be necessary to provide up to four position detectors which are not located in the same plane in order to find the position of a given natural object.
  • a restriction in the number of position detectors to fewer than is strictly necessary for achieving an unambiguous mathematical solution when calculating the object's position assumes that alternative solutions can be ruled out since they are not meaningful, for example because they are located under the ground, outside the arena or the like. Additional position detectors will be necessary if the extent of the area is such that not every position in the area is within the range of all the position detectors, or the area is divided into several subareas, such as, for example, several passing points along a ski run.
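As a sketch of the position calculation described above, the following Python function recovers an object's coordinates from the known positions of four non-coplanar detectors and the measured distances to each. The function name and the linearized solve are illustrative assumptions, not taken from the patent, which leaves the exact mathematical method open.

```python
def trilaterate(detectors, distances):
    """Recover (x, y, z) from known detector positions and measured ranges.

    Linearizes the sphere equations |X - p_i|^2 = d_i^2 by subtracting the
    first one, giving a 3x3 linear system solved by Gaussian elimination.
    Requires four detectors not lying in one plane for an unambiguous fix.
    """
    p0, d0 = detectors[0], distances[0]
    A, b = [], []
    for p, d in zip(detectors[1:], distances[1:]):
        # 2 (p_i - p_0) . X = d_0^2 - d_i^2 + |p_i|^2 - |p_0|^2
        A.append([2 * (p[k] - p0[k]) for k in range(3)])
        b.append(d0 ** 2 - d ** 2
                 + sum(p[k] ** 2 for k in range(3))
                 - sum(p0[k] ** 2 for k in range(3)))
    # Gaussian elimination with partial pivoting on the 3x3 system
    n = 3
    M = [row + [bi] for row, bi in zip(A[:3], b[:3])]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return tuple(x)
```

With fewer detectors the same linearization yields an underdetermined system, which is where the remark above about ruling out non-meaningful solutions (e.g. positions under the ground) comes in.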
  • the objects' positions are calculated on the basis of their position in a video image when this video image is filmed by a camera with a known position, known camera angle (tilt and panning) and known zoom angle (coverage angle).
  • the distance of the objects from the camera cannot be determined, but only the direction from the camera to the object, and the stored position, or the direction, can therefore only be used for generating display of a synthetic object representing the object, in video sequences filmed by the same camera. If, however, the objects are filmed with at least two cameras, thus making it possible to determine a direction from these cameras' positions to the objects' positions, it will also be possible by means of this embodiment to calculate the objects' actual positions.
  • These embodiments require the respective objects to be capable of being identified in a video image by means of image processing and, for example, pattern recognition, but they do not require the deployment of separate position detectors.
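The two-camera variant mentioned above can be sketched as follows: each camera contributes a line in space (its position plus the direction to the object), and the object's position may be estimated as the midpoint of the shortest segment joining the two lines. This is a standard triangulation construction; it is an assumption that the patent's calculation would take this particular form, and the function names are illustrative.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(c1, r1, c2, r2):
    """Estimate a position from two viewing rays (camera position + direction).

    Each object vector only fixes a line in space; the object position is
    taken as the midpoint of the shortest segment between the two lines,
    since in practice they rarely intersect exactly.
    """
    w0 = [a - b for a, b in zip(c1, c2)]
    a, b, c = dot(r1, r1), dot(r1, r2), dot(r2, r2)
    d, e = dot(r1, w0), dot(r2, w0)
    denom = a * c - b * b          # zero only for parallel rays
    s = (b * e - c * d) / denom    # parameter along ray 1
    t = (a * e - b * d) / denom    # parameter along ray 2
    p1 = [ci + s * ri for ci, ri in zip(c1, r1)]
    p2 = [ci + t * ri for ci, ri in zip(c2, r2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]
```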
  • sequences of data are registered representing the positions of one or more natural objects over a given period of time.
  • knowing the position of the camera which has filmed the video sequence in which the synthetic object is to be inserted, together with this camera's direction and use of zoom, it is possible to calculate the synthetic object's position in the video image.
  • the necessary calculations will be explained by means of vectors, but it should be understood that the use of equivalent mathematical methods such as, for example, matrices is also covered by the invention.
  • the stored positions represent samples of positions, preferably at fixed time intervals, and are often connected to a timer system.
  • a time reference will exist linked to each individual stored position. This means that it is possible to find an absolute or relative point of time for each individual stored position, or conversely it is possible to find a stored position for any point of time within the period for which an object's position is stored, with the accuracy permitted by the sampling intervals. In the same way a point of time will naturally be defined for all natural objects which are represented in real time.
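The time-referenced storage described above can be illustrated with a small lookup that returns a position for any point of time within the stored period, interpolating between samples with the accuracy the sampling interval permits. The data layout (a time-sorted list of (time, position) pairs) is an assumed representation.

```python
from bisect import bisect_left

def position_at(samples, t):
    """Look up an object's position at an arbitrary time.

    `samples` is a list of (time, (x, y, z)) pairs sorted by time;
    positions between samples are linearly interpolated, and times
    outside the stored period clamp to the first/last sample.
    """
    times = [s[0] for s in samples]
    i = bisect_left(times, t)
    if i == 0:
        return samples[0][1]
    if i == len(samples):
        return samples[-1][1]
    (t0, p0), (t1, p1) = samples[i - 1], samples[i]
    f = (t - t0) / (t1 - t0)
    return tuple(a + f * (b - a) for a, b in zip(p0, p1))
```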
  • fig. 1 is a principle drawing illustrating the layout of a system according to the invention
  • figs. 2a-d illustrate vector representation of positions relative to a display surface
  • fig. 3 illustrates a video camera which can be employed when implementing the invention
  • fig. 4 illustrates a transponder for use in position finding in a possible embodiment of the invention
  • fig. 5 illustrates an alternative method for position finding according to the present invention
  • fig. 6 illustrates examples of video images generated by means of the present invention.
  • Figure 1 is a principle drawing of a system according to the invention.
  • three detectors 1, 2, 3 are located around an arena 5.
  • Use is also made of two cameras 7, 8 which are located along one longitudinal side of the arena.
  • a natural object 10 whose position is to be registered over a given period.
  • These registered positions can subsequently be used to generate synthetic objects representing the position of said natural object in video sequences which are filmed by one of the two cameras 7, 8.
  • the natural object may be an athlete, a football or any other object.
  • the three detectors which are employed in this example are capable of detecting the distance to the object 10.
  • the registered distance is transmitted via a data bus 11, or by means of another known per se form of data transmission, to a device which by means of the registered distances and the respective detectors' known positions, calculates the object's position in the form of coordinates x1, y1, z1 in a defined coordinate system. It will be convenient to define an orthogonal coordinate system with the result that the arena is located substantially in the x,y plane, while the z-axis is perpendicular to this plane.
  • the cameras 7, 8 will also have positions which are unambiguously defined in said coordinate system. It will thus be possible to describe the position of an object in the arena relative to one of the cameras in the form of a vector OV1, OV2 with length corresponding to the distance between camera and object and direction from the camera to the object. These vectors will be referred to as object vectors. Furthermore, the cameras will be equipped with devices for registering camera angles and zoom angles. These too can be expressed as vectors. These vectors may have fixed or arbitrary length, and both will begin at the point which is defined as the camera's position.
  • the camera vector KV1, KV2 represents the camera's direction and will have a direction perpendicularly inside the picture plane, while the zoom vector ZV1, ZV2 is defined so as to define the outer edge of the video picture.
  • the camera vector's length may express the degree of zoom.
  • the zoom vector may, for example, be expressed as the sum of the camera vector and a vector which is perpendicular to the camera vector.
  • by means of the camera vector, zoom vector and object vector it will be possible to establish whether a given position in the coordinate system lies inside or outside the video picture from a camera, and possibly where in the video picture said position will be.
  • Figure 2 illustrates these relationships in further detail. Since the video picture will normally be rectangular, a single zoom vector will not define the entire picture's outer edge. In order to be able to determine whether the object vector OV is located between the zoom vector ZV and the camera vector KV, it is therefore necessary to find a zoom vector ZV which is located in the same plane as object vector OV and camera vector KV. The plane in question will be known since both the object vector OV and the camera vector KV are known, and it will then be possible to determine the zoom vector ZV on the basis of the camera's characteristics.
  • Figure 2a illustrates these vectors in relation to a picture field I
  • figure 2b illustrates the same vectors projected down on to the xy plane.
  • Figures 2c and 2d illustrate a corresponding representation of a situation where the object vector OV is not located between the camera vector KV and the zoom vector ZV.
  • the position of the object concerned in the picture field I will be at a point on the straight line from the centre of the picture field to the picture's outer edge where it is intersected by the zoom vector, and the position on this line can be found by means of the angles between the respective vectors.
  • if the zoom vector ZV is located between the camera vector KV and the object vector, as illustrated in figures 2c and 2d, the object's position will be located outside the picture field I. However, it will still be possible to calculate a position in the picture plane, and it will be possible to employ this position to find a direction out of the picture which can form the basis for an indication of the object's position.
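The tests in figures 2a-2d can be sketched numerically. The simplification below treats the picture edge as a cone around the camera vector, so a single half coverage angle stands in for the rectangular edge; a fuller implementation would pick the zoom vector lying in the plane of the object vector and camera vector, as the text describes. Function and parameter names are illustrative assumptions.

```python
import math

def image_position(cam_pos, cam_dir, half_angle, obj_pos):
    """Where along the line from picture centre to picture edge an object lands.

    Simplified model: the picture edge is a cone of half-angle `half_angle`
    (the zoom vector) around the camera vector `cam_dir`. Returns
    (inside, r), where r = 0 at the picture centre, r = 1 at the edge
    where the zoom vector intersects, and r > 1 outside the picture.
    """
    ov = [o - c for o, c in zip(obj_pos, cam_pos)]          # object vector OV
    dot = sum(a * b for a, b in zip(ov, cam_dir))
    n_ov = math.sqrt(sum(a * a for a in ov))
    n_kv = math.sqrt(sum(a * a for a in cam_dir))
    angle = math.acos(max(-1.0, min(1.0, dot / (n_ov * n_kv))))
    if angle >= math.pi / 2:
        return False, float("inf")                          # behind the camera
    r = math.tan(angle) / math.tan(half_angle)
    return r <= 1.0, r
```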
  • FIG. 3 illustrates a video camera 7, 8 for use in implementing the invention.
  • a schematic illustration is given here of how camera vector KV and zoom vector ZV are produced.
  • the incident light will naturally be refracted so that it is focused on the video detector CCD.
  • the zoom vector on the other hand, will begin at a point which will be located in the optical plane 16 and which may be located in front of or behind this detector, depending on the use of zoom and focusing. In order to simplify the subsequent calculations, however, it is an acceptable approximation to assume that the zoom vector always begins at the same point, and that this point is common for zoom vector, camera vector and the camera's position.
  • the camera 7, 8 will comprise a camera head 17 with angle scales for determining the camera's direction (panning and tilt), together with means for registering values for zoom and focusing (not illustrated). These registered values will be intercepted by a unit 20 which is adapted to transmit them via a data bus 11a or another suitable transmission medium, thus enabling them to be received and used by the production equipment in connection with the further steps in the invention. Similarly, the registered video signal will be transmitted from the video camera to the production equipment via a suitable transmission path 11b.
  • These two transmission paths 11a, 11b can be realised in a number of ways, and they can be physically separated or form the same physical connection. In a preferred embodiment it will be natural to separate video signals and data signals, at least on two logically separate channels, in which case it will be natural for all the registered data to be transmitted on a data network, while the video transmission is performed via a video network.
  • the object or objects whose positions are to be detected may be equipped with transmitters or with transponders.
  • a transponder will preferably be used which utilises acoustic surface wave technology (SAW technology).
  • a transponder of this kind is illustrated in figure 4.
  • the transponder comprises a substrate 20 which, e.g., may be composed of a crystal such as lithium niobate, which carries a surface pattern of metal composed of transducers, reflectors, etc.
  • a polling pulse from a position detector (1, 2, 3, fig. 1) is received by the antenna (not illustrated) and fed to a transducer 21, which is illustrated here in the form of a so-called interdigital transducer.
  • the received electromagnetic energy in the polling pulse is converted in the transducer 21 into an acoustic surface wave which moves along the substrate's surface.
  • at a certain distance from the transducer 21, reflectors 22 are placed. When the acoustic surface wave hits the reflectors, reflections are created which move back towards the transducer 21.
  • the transducer will convert the reflected waves to electromagnetic pulses which form the response signal which is transmitted via the transponder's antenna.
  • absorbers 23 may be provided to prevent undesirable reflections.
  • the number and location of the reflectors 22 ensure that each transponder transmits a unique return signal. Thus it will be possible, for example, to equip each of the participants at a sports event with his/her unique transponder.
  • when a detector receives a return signal after first having transmitted a polling pulse, the distance to the transponder which returned it can be determined from the time that elapses between transmission of the polling pulse and reception of the reply signal, taking into account any delay in the transponder, while the transponder's identity can be established on the basis of the characteristics of the return signal.
  • transponders may be employed with different delays, with the result that no two transponders located in the area in question produce return signals which collide in time.
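The distance calculation from the polling pulse's round-trip time reduces to one line: subtract the transponder's fixed internal SAW delay (an assumed parameter here), then halve the remainder and multiply by the propagation speed.

```python
C = 299_792_458.0  # propagation speed of the radio polling pulse, m/s

def transponder_distance(round_trip_s, transponder_delay_s):
    """Detector-to-transponder range from a polling pulse's round-trip time.

    The SAW transponder adds a fixed internal delay (the acoustic wave's
    travel time to the reflectors and back); subtracting it and halving
    the remainder gives the one-way radio distance.
    """
    return C * (round_trip_s - transponder_delay_s) / 2.0
```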
  • Figure 5 illustrates an alternative method for determining the position of a natural object, where figure 5a illustrates a camera K which is filming a natural object 10, while figure 5b illustrates a video image I filmed by this camera.
  • a direction vector RV is determined from the camera K to the natural object 10.
  • this direction vector will only have unit length (the vector's length therefore does not express the distance to the object).
  • the camera K will be equipped with sensors in the camera head 17 which detect with very high accuracy the direction in which the camera is pointing. This direction will indicate the camera vector KV. In the camera the degree of zoom employed is also registered.
  • the object whose position is to be determined is identified by means of image processing techniques, such as pattern recognition, and a position is established for it in the display surface.
  • the direction vector RV will thereby also be determined. This direction vector will define a straight line through the camera's position and the position of the natural object.
  • the position of the natural object will thus not be unambiguously defined, but it will always be possible to calculate the position of this object in any picture filmed by the same camera at the same position, as long as camera vector and zoom use (coverage angle) are known for this camera.
  • the position can be found by finding the zoom vector ZV for the camera in the same way, with the result that this vector remains located in the same plane as the direction vector RV and the camera vector KV. If the direction vector RV is located between the zoom vector ZV and the camera vector KV, it will be possible to find the position in the picture by means of the angles between the respective vectors.
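Determining the direction vector RV from an object's position in the display surface can be sketched with a pinhole-style model: the ray is the camera vector plus offsets along the image's right and up axes, scaled by the coverage angles. The pan/tilt parametrisation and axis conventions below are illustrative assumptions; the patent only requires that camera direction and zoom use are registered with sufficient accuracy.

```python
import math

def norm(v):
    n = math.sqrt(sum(a * a for a in v))
    return tuple(a / n for a in v)

def pixel_ray(pan, tilt, half_h, half_v, u, v):
    """Direction vector RV through an image point (u, v) in [-1, 1].

    `pan`/`tilt` come from the camera-head angle scales; `half_h`/`half_v`
    are the half coverage angles given by the zoom setting. The ray is the
    camera vector KV plus scaled offsets along the image axes.
    """
    f = (math.cos(tilt) * math.cos(pan),      # camera vector KV (forward)
         math.cos(tilt) * math.sin(pan),
         math.sin(tilt))
    r = (-math.sin(pan), math.cos(pan), 0.0)  # image right axis
    up = (-math.sin(tilt) * math.cos(pan),    # image up axis
          -math.sin(tilt) * math.sin(pan),
          math.cos(tilt))
    return norm(tuple(fi + u * math.tan(half_h) * ri + v * math.tan(half_v) * ui
                      for fi, ri, ui in zip(f, r, up)))
```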
  • these positions or directions will be stored in a register together with a time reference.
  • These stored data will represent the positions of the respective objects, either in the form of coordinates in a coordinate system, or in the form of directions from a defined position, preferably the position of a camera.
  • these positions will preferably be stored in a system 13 which comprises at least a memory for data representing these positions, and also a computing unit for performing the calculations described in this description together with video production equipment.
  • by means of the registered positions it will be possible, for example, to settle doubtful situations in connection with sport and athletics, such as offside decisions in football, clarification of controversial cases in connection with goal scoring and the like.
  • a system designed according to the invention will thereby be capable of solving such tasks in a manner which per se is previously known.
  • the system will comprise equipment which enables a synthetic representation of natural objects to be generated, based on the detected position of this object.
  • the term natural object will be employed to indicate an actual physical object or the display of such an object in a video sequence
  • the term synthetic object will indicate a generated symbol indicating the position of a natural object when the natural object itself is not shown in the video sequence or is only shown in a manner which is difficult for an observer to detect.
  • the detected position of an object can be used for generating a synthetic object in a video sequence in real time, i.e. for indicating the natural object's position when it cannot be easily seen on a video or television screen. It may, for example, involve a skier who is hidden behind a cluster of trees, an Alpine skier whose position requires to be shown in a general view of an entire downhill run, or an ice hockey puck or golf ball which on account of its size and speed is difficult to follow with the eyes.
  • Such a utilisation of the system according to the present invention will correspond to that which is stated in the applicant's previous Norwegian patent no. 303.310.
  • the present invention offers a number of new possibilities which are achieved by means of the stored data for the positions of the natural objects.
  • the position of an object at a given time can be indicated in a video sequence which was filmed at another time, as illustrated in figure 6a.
  • This may be employed, for example, by inserting in a television picture, which shows an athlete 30 as he/she is passing an intermediate station, a synthetic object 31 showing the position of a second athlete who passed the same intermediate station at an earlier time.
  • the position of the synthetic object 31 in the picture surface is calculated as indicated above. It will then be natural to show the position of this second athlete when he/she had been in action for the same amount of time as the athlete who is in the process of passing the intermediate station.
  • the synthetic object's position in the display surface is calculated, for example, as indicated in the description of figures 2a and 2b.
  • the actual synthetic object may assume any form whatever.
  • the synthetic object will preferably not be a close copy of the natural object it represents, since this may make it difficult for viewers to distinguish between the representation of natural and synthetic objects on the television screen.
  • it will be desirable to employ known per se picture processing technology to analyse the colours in the part of the picture where the synthetic object is to be placed in order to ensure that the synthetic object has a colour which is in sharp contrast to the background.
  • As illustrated in figure 6c, by means of the stored positions for the object represented by the synthetic object and the real-time registered position of the natural object 30 which is shown in the video sequence concerned and which is compared with the synthetic object 33, it will be possible to calculate the distance between these two positions. This distance may either be calculated as the straight line between the two points in the defined coordinate system representing the positions in question, or the distance can be calculated along a defined track between these points. This track will, for example, correspond to the course a skier must follow in order to cover the distance between the two positions. It will then be possible to display the calculated distance together with the synthetic object 33, either as part of the synthetic object, or in a separate area on the video or television screen where the video sequence concerned is shown.
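The two distance measures just described, the straight line in the coordinate system and the distance along a defined track, might be computed as follows. It is an assumption that the track is stored as a polyline and that registered positions are projected onto it before arc lengths are compared; all names are illustrative.

```python
import math

def _proj_arclength(track, p):
    """Arc-length along the polyline `track` of the point on it nearest `p`."""
    best, acc, best_d = 0.0, 0.0, float("inf")
    for a, b in zip(track, track[1:]):
        seg = [bi - ai for ai, bi in zip(a, b)]
        L2 = sum(s * s for s in seg)
        # clamp the projection parameter to the segment
        t = max(0.0, min(1.0, sum((pi - ai) * si
                                  for pi, ai, si in zip(p, a, seg)) / L2))
        q = [ai + t * si for ai, si in zip(a, seg)]
        d = math.dist(p, q)
        if d < best_d:
            best_d, best = d, acc + t * math.sqrt(L2)
        acc += math.sqrt(L2)
    return best

def track_distance(track, p1, p2):
    """Distance between two positions measured along the course polyline."""
    return abs(_proj_arclength(track, p1) - _proj_arclength(track, p2))

def straight_distance(p1, p2):
    """Straight-line distance between the two registered positions."""
    return math.dist(p1, p2)
```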
  • the difference is calculated between the current value of the time for the competitor concerned (the natural object's aggregate time for the position it occupies in real time) and the corresponding aggregate time for the competitor represented by the synthetic object when it was in approximately the same position.
  • in addition to the distance in space, it will be possible to display the distance in time together with the synthetic object.
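The time difference at approximately the same position can be sketched by inverting the reference competitor's stored samples: given a course position, interpolate the time at which the reference competitor was there, then subtract from the current competitor's time. The (time, course-position) sample format is an assumed representation.

```python
from bisect import bisect_left

def time_at_position(samples, s):
    """Invert a competitor's stored (time, course-position) samples.

    Linear interpolation between the two samples bracketing course
    position `s`; `samples` must be sorted by position (monotone course).
    """
    pos = [p for _, p in samples]
    i = bisect_left(pos, s)
    if i == 0:
        return samples[0][0]
    if i == len(samples):
        return samples[-1][0]
    (t0, s0), (t1, s1) = samples[i - 1], samples[i]
    return t0 + (s - s0) / (s1 - s0) * (t1 - t0)

def time_gap(current_time, current_pos, reference_samples):
    """Positive result: the current competitor is behind the reference."""
    return current_time - time_at_position(reference_samples, current_pos)
```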
  • the position which is to be displayed in the form of a synthetic object is located outside the display surface for the video sequence concerned, it is even more appropriate to illustrate the distance between the two positions in the form of an indication of time and/or distance. Furthermore, it is possible to generate a graphic illustration of this distance, for example in the form of a line or a column. By this means it is also possible to generate a comparison of more than two objects, e.g. a plurality of competitors in a race. A possible example of such a comparison is illustrated in figure 6d, where on the basis of the position of a competitor 30 a bar chart 34 is generated illustrating the distance to other competitors. The distance may be defined as distance in time when they were in the same position, or the distance in space when they had a corresponding time.
  • the illustrated distance may also be the actual distance at the relevant time back to the other competitors if real time positions exist for them, or possibly the distance back in time to when the competitor concerned, if he/she is leading the race, was in the respective positions in which the other competitors are at the moment.
  • Figure 6e illustrates an example of a graphic representation 35 of a development over time, for example of a long distance race in track and field or a cross-country skiing race, generated by means of the present invention.
  • the distance between two or more competitors can be calculated as described above for a desired number of positions or times in the race, in principle for each individual stored position, and this may be illustrated, for example in the form of curves.
  • These curves may be normalised with regard to one competitor or a pre-issued schedule represented by a straight line, and the competitors' distance to this competitor or this schedule will be illustrated as curves located above or below this straight line.
  • the example in figure 6e illustrates time differences at various positions in the race, but it will of course also be possible to show distance differences at various times during the race.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Health & Medical Sciences (AREA)
  • Physical Education & Sports Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Circuits (AREA)

Abstract

A method and a system for generating synthetic or composite video images or video sequences where registered positions for natural objects are represented graphically or alphanumerically. The method is based on detection of the positions of natural objects and storing data representing these positions together with associated time reference in a memory, and subsequent processing of these stored data in order to generate suitable representations of the data and possibly differences between them in a video image or video sequence. The system comprises a subsystem for detecting positions and storing them, together with a subsystem for retrieving these stored data from a memory and processing them in order to generate the desired video images or video sequences.

Description

INDICATING POSITIONAL INFORMATION IN A VIDEO SEQUENCE
When broadcasting events where it is desired to compare occurrences taking place at different times as if they were occurring simultaneously, such as, for example, in various sports events, the traditional method has involved the need to compare aggregate times or to show several small pictures within one television picture. Attempts have also been made recently to generate a composite picture where a section from one sequence is inserted into another. This generally involves comparing two competitors who have started at different times, but whose passing of a reference point is to be shown as if the runners had started simultaneously. Other alternatives are sports such as Alpine skiing events, where it is required to show how one competitor is placed relative to a previous competitor, or events such as the high jump and pole vaulting or ski jumping, where it is desirable to show a visual comparison between two or more competitors.
EP-A-0 252 215 discloses a method for showing two subsequent events in display surfaces located beside each other. The method is particularly intended for displaying sports events such as, e.g., skiing and skating competitions and jumping events. The method states in particular that the events which are displayed simultaneously are synchronised in time or displayed in parallel in space.
Recently, systems have been presented where this method is further developed by combining a section of a stored video sequence with a current video sequence instead of being shown in parallel with it. Methods and systems for composite pictures or video sequences are known from, amongst others, US patent 4,602,286 and US patent 5,099,331.
US patent 5,264,933 discloses a method for altering video images by inserting pictures, text or the like in a manner which makes it seem as if the inserted picture is a part of the original picture independently of panning and zooming of the camera. The method makes it possible to alter advertising panels in a sports arena in order to adapt them to suit a given public.
These previously known techniques allow video pictures to be manipulated in various ways in order to increase or alter the amount of information contained in the picture. What is described with regard to parallel display of events taking place at different times, however, is limited by the fact that it is necessary to have video sequences which must be, or at least must be capable of being made almost identical with respect to camera angles, panning and zooming. The indicated method where additional information can be adapted to such conditions is limited to the ability to display additional information which is generated in advance, and not information representing events which are taking place, for example, while a sports arrangement is unfolding.
The present invention, however, is based on the realisation that on the basis of knowledge of the position of an object which is to be incorporated in a video image, one is no longer dependent on the object in question being filmed with corresponding camera angles, panning and zooming. Instead, a synthetic object can be generated which is placed inside the video image concerned. The positioning of the object in the picture can be calculated by means of the position of the natural object which the synthetic object is to represent, in relation to the camera's position and in relation to camera angle and zoom. Thus it is the object of the invention to provide a method and a system for inserting in a video sequence a representation of one or more objects based on stored information concerning these objects' positions.
For the record it should be pointed out that the terms video and video production in this context refer both to live broadcasts for television and other productions for television and video.
It has further been realised that by means of this knowledge of positions it is also possible to calculate distances in time or space, and these distances can also be displayed alphanumerically or graphically on a screen instead of or in addition to a representation of the respective objects.
From FR-B-2.726.370 a method is known for finding the positions of the players and the ball on a football pitch. Both players and ball are equipped with transmitters, and receivers are located around the pitch for determining said positions. The registered positions are then used for registering errors such as offside, and for analysing the game. Similar methods are described in WO 93/01867, WO 95/10337 and WO 95/08816. None of these publications indicates any registration and use of position data in such a manner that events which take place simultaneously can be compared or used for generating a composite or synthetic video sequence.
In the present invention the position is determined of all the objects which will subsequently be represented in a video sequence. These objects may, for example, be competitors in a sports arrangement, or other objects whose position it will be possible to use for subsequent generation of a display in a video sequence. In the following description the term natural object will be employed for such objects and the direct representation thereof in video sequences, while the term synthetic object will be employed for an object which is displayed in a video sequence and which represents in a suitable manner registered position data for the natural objects.
In a first embodiment all the objects are equipped with a transmitter. The transmitters may be active radio transmitters based on battery operation, or they may be active or passive transceivers. In a preferred embodiment transponders without batteries are employed, preferably implemented by means of acoustic surface wave technology (SAW technology).
The number of position detectors which are necessary for determining the position of the objects which are equipped with such transmitters or transponders is dependent on the nature of the area in which the objects are located. If the area is substantially flat and relatively limited, two detectors should suffice in order to achieve an unambiguous position determination. If, however, the area is three-dimensional, i.e. if there are three coordinates x,y,z which are to be found, or the extent of the area is relatively large in relation to the detectors' positions, it may be necessary to provide up to four position detectors which are not located in the same plane in order to find the position of a given natural object. A restriction in the number of position detectors to fewer than is strictly necessary for achieving an unambiguous mathematical solution when calculating the object's position assumes that alternative solutions can be ruled out since they are not meaningful, for example because they are located under the ground, outside the arena or the like. Additional position detectors will be necessary if the extent of the area is such that not every position in the area is within the range of all the position detectors, or the area is divided into several subareas, such as, for example, several passing points along a ski run.
In an alternative embodiment of the invention the objects' positions are calculated on the basis of their position in a video image when this video image is filmed by a camera with a known position, known camera angle (tilt and panning) and known zoom angle (coverage angle). In this alternative embodiment the distance of the objects from the camera cannot be determined, but only the direction from the camera to the object, and the stored position, or the direction, can therefore only be used for generating display of a synthetic object representing the object, in video sequences filmed by the same camera. If, however, the objects are filmed with at least two cameras, thus making it possible to determine a direction from these cameras' positions to the objects' positions, it will also be possible by means of this embodiment to calculate the objects' actual positions. These embodiments, however, require the respective objects to be capable of being identified in a video image by means of image processing and, for example, pattern recognition, but they do not require the deployment of separate position detectors.
On the basis of the calculations mentioned above, according to the invention sequences of data are registered representing the positions of one or more natural objects over a given period of time. On the basis of the relations between these stored positions, the position of the camera which has filmed the video sequence in which the synthetic object is to be inserted, and this camera's direction and use of zoom, it is possible to calculate the synthetic object's position in the video image. In this description the necessary calculations will be explained by means of vectors, but it should be understood that the use of equivalent mathematical methods such as, for example, matrices is also covered by the invention.
On the basis of the stored positions, it will also be possible to calculate other values than positions in a display surface. These values may be represented alphanumerically or graphically in a video sequence.
Since the stored positions represent samples of positions, preferably at fixed time intervals, and are often connected to a timer system, a time reference will exist linked to each individual stored position. This means that it is possible to find an absolute or relative point of time for each individual stored position, or conversely it is possible to find a stored position for any point of time within the period for which an object's position is stored, with the accuracy permitted by the sampling intervals. In the same way a point of time will naturally be defined for all natural objects which are represented in real time.
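As a non-limiting sketch (all names here are illustrative and not part of the original disclosure), the time-referenced position samples described above can be stored as sorted (time, position) pairs, and a position for any point of time within the stored period can then be found by linear interpolation between the two nearest samples:

```python
from bisect import bisect_left

def position_at(samples, t):
    """Return an interpolated (x, y, z) position for time t from a sequence
    of (time, (x, y, z)) samples sorted by time. Outside the stored period
    the nearest stored position is returned."""
    times = [s[0] for s in samples]
    i = bisect_left(times, t)
    if i == 0:
        return samples[0][1]
    if i == len(samples):
        return samples[-1][1]
    t0, p0 = samples[i - 1]
    t1, p1 = samples[i]
    f = (t - t0) / (t1 - t0)  # linear interpolation between the two samples
    return tuple(a + f * (b - a) for a, b in zip(p0, p1))
```

The accuracy of such a lookup is limited by the sampling interval, exactly as the text notes.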
The invention will now be described in more detail in the form of embodiments, with reference to the attached drawings, in which
fig. 1 is a principle drawing illustrating the layout of a system according to the invention; figs. 2a-d illustrate vector representation of positions relative to a display surface; fig. 3 illustrates a video camera which can be employed when implementing the invention; fig. 4 illustrates a transponder for use in position finding in a possible embodiment of the invention;
fig. 5 illustrates an alternative method for position finding according to the present invention;
fig. 6 illustrates examples of video images generated by means of the present invention.
Figure 1 is a principle drawing of a system according to the invention. In this example three detectors 1, 2, 3 are located around an arena 5. Use is also made of two cameras 7, 8 which are located along one longitudinal side of the arena. In the arena there is located a natural object 10 whose position is to be registered over a given period. These registered positions can subsequently be used to generate synthetic objects representing the position of said natural object in video sequences which are filmed by one of the two cameras 7, 8. The natural object may be an athlete, a football or any other object. The three detectors which are employed in this example are capable of detecting the distance to the object 10. The registered distance is transmitted via a data bus 11, or by means of another known per se form of data transmission, to a device which by means of the registered distances and the respective detectors' known positions, calculates the object's position in the form of coordinates x₁, y₁, z₁ in a defined coordinate system. It will be convenient to define an orthogonal coordinate system with the result that the arena is located substantially in the x,y plane, while the z-axis is perpendicular to this plane.
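For the flat-arena case, the calculation of the object's coordinates from the three registered distances can be sketched as follows (an illustrative Python sketch, not part of the disclosure; subtracting the circle equations pairwise yields a linear system in x and y):

```python
def trilaterate_2d(p1, d1, p2, d2, p3, d3):
    """Solve for (x, y) from three detector positions p1..p3 and the measured
    distances d1..d3, assuming the arena lies in the x,y plane."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3
    # Subtracting circle equation 1 from equations 2 and 3 gives A @ [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # non-zero when the detectors are not collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

With a three-dimensional area, a fourth detector (not in the same plane) would be needed in the corresponding way, as the text explains.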
As with the detectors 1, 2, 3, the cameras 7, 8 will also have positions which are unambiguously defined in said coordinate system. It will thus be possible to describe the position of an object in the arena relative to one of the cameras in the form of a vector OV₁, OV₂ with length corresponding to the distance between camera and object and direction from the camera to the object. These vectors will be referred to as object vectors. Furthermore, the cameras will be equipped with devices for registering camera angles and zoom angles. These too can be expressed as vectors. These vectors may have fixed or arbitrary length, and both will begin at the point which is defined as the camera's position. The camera vector KV₁, KV₂ represents the camera's direction and will have a direction perpendicularly inside the picture plane, while the zoom vector ZV₁, ZV₂ is defined so as to define the outer edge of the video picture. The camera vector's length may express the degree of zoom. Thus the zoom vector may, for example, be expressed as the sum of the camera vector and a vector which is perpendicular to the camera vector.
By means of camera vector, zoom vector and object vector it will be possible to establish whether a given position in the coordinate system lies inside or outside the video picture from a camera, and possibly where in the video picture said position will be.
Figure 2 illustrates these relationships in further detail. Since the video picture will normally be rectangular, a single zoom vector will not define the entire picture's outer edge. In order to be able to determine whether the object vector OV is located between the zoom vector ZV and the camera vector KV, it is therefore necessary to find a zoom vector ZV which is located in the same plane as object vector OV and camera vector KV. The plane in question will be known since both the object vector OV and the camera vector KV are known, and it will then be possible to determine the zoom vector ZV on the basis of the camera's characteristics. Figure 2a illustrates these vectors in relation to a picture field I, while figure 2b illustrates the same vectors projected down on to the xy plane. Figures 2c and 2d illustrate a corresponding representation of a situation where the object vector OV is not located between the camera vector KV and the zoom vector ZV.
If the object vector OV is located between the camera vector KV and the zoom vector ZV, as is the case in figures 2a and 2b, the position of the object concerned in the picture field I will be at a point on the straight line from the centre of the picture field to the picture's outer edge where it is intersected by the zoom vector, and the position on this line can be found by means of the angles between the respective vectors. If the zoom vector ZV is located between the camera vector KV and the object vector, as illustrated in figures 2c and 2d, however, the object's position will be located outside the picture field I. However, it will still be possible to calculate a position in the picture plane, and it will be possible to employ this position to find a direction out of the picture which can form the basis for an indication of the object's position.
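The inside/outside test and the radial position described above can be sketched as follows (an illustrative approximation in which the object's distance from the picture centre is taken proportional to the ratio of angles; a strict pinhole model would use the ratio of tangents instead):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def angle_between(a, b):
    """Angle in radians between two vectors."""
    return math.acos(dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b))))

def radial_fraction(ov, kv, zv):
    """Fraction of the distance from the picture centre to the outer edge at
    which the object appears, measured along the straight line where the zoom
    vector ZV intersects the edge. A value <= 1.0 means the object vector OV
    lies between the camera vector KV and ZV (inside the picture field I);
    a value > 1.0 means the position falls outside the picture field."""
    return angle_between(ov, kv) / angle_between(zv, kv)
```

Here ZV is assumed already chosen in the plane spanned by OV and KV, as the description requires; a fraction above 1.0 still yields the direction out of the picture towards the object.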
A more detailed description of this technique can be found in the applicant's Norwegian patent NO 303.310.
Figure 3 illustrates a video camera 7, 8 for use in implementing the invention. A schematic illustration is given here of how camera vector KV and zoom vector ZV are produced. The incident light will naturally be refracted so that it is focused on the video detector CCD. The zoom vector, on the other hand, will begin at a point which will be located in the optical plane 16 and which may be located in front of or behind this detector, depending on the use of zoom and focusing. In order to simplify the subsequent calculations, however, it is an acceptable approximation to assume that the zoom vector always begins at the same point, and that this point is common for zoom vector, camera vector and the camera's position. The camera 7, 8 will comprise a camera head 17 with angle scales for determining the camera's direction (panning and tilt), together with means for registering values for zoom and focusing (not illustrated). These registered values will be intercepted by a unit 20 which is adapted to transmit them via a data bus 11a or another suitable transmission medium, thus enabling them to be received and used by the production equipment in connection with the further steps in the invention. Similarly, the registered video signal will be transmitted from the video camera to the production equipment via a suitable transmission path 11b. These two transmission paths 11a, 11b can be realised in a number of ways, and they can be physically separated or form the same physical connection. In a preferred embodiment it will be natural to separate video signals and data signals, at least on two logically separate channels, in which case it will be natural for all the registered data to be transmitted on a data network, while the video transmission is performed via a video network.
As already described, the object or objects whose positions are to be detected may be equipped with transmitters or with transponders. A transponder will preferably be used which utilises acoustic surface wave technology (SAW technology). A transponder of this kind is illustrated in figure 4. The transponder comprises a substrate 20 which, e.g., may be composed of a crystal such as lithium niobate which has a surface pattern of metal composed of transducers, reflectors, etc. A polling pulse from a position detector (1, 2, 3, fig. 1) is received by the antenna (not illustrated) and passed to a transducer 21, which is illustrated here in the form of a so-called interdigital transducer. The received electromagnetic energy in the polling pulse is converted in the transducer 21 into an acoustic surface wave which moves along the substrate's surface. At a certain distance from the transducer 21 there are placed reflectors 22. When the acoustic surface wave hits the reflectors, reflections are created which move back towards the transducer 21. The transducer will convert the reflected waves to electromagnetic pulses which form the response signal which is transmitted via the transponder's antenna. At the ends of the transponder, surface wave absorbers 23 may be provided to prevent undesirable reflections.
The number and location of the reflectors 22 will be able to ensure that each transponder transmits a unique return signal. Thus it will be possible, for example, to equip each of the participants in a sports arrangement with his/her unique transponder. When a detector receives a return signal after first having transmitted a polling pulse, it will be possible to determine the distance to the transponder which has returned the return signal from the time that elapses from when the polling pulse is transmitted until the reply signal is received, taking into account any delay in the transponder, while the transponder's identity can be established on the basis of the characteristics of the return signal.
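The distance computation from the round-trip time can be sketched as follows (illustrative only; the polling pulse and the response each travel the detector–transponder distance once, so the delay-corrected time is halved):

```python
SPEED_OF_LIGHT = 299_792_458.0  # propagation speed of the radio signal, m/s

def transponder_distance(round_trip_s, transponder_delay_s):
    """Distance in metres from detector to transponder, given the measured
    round-trip time and the transponder's known internal delay (for a SAW
    transponder, the acoustic propagation time on the substrate)."""
    return SPEED_OF_LIGHT * (round_trip_s - transponder_delay_s) / 2.0
```

The deliberately different delays mentioned below fit naturally into this scheme: each transponder identity simply carries its own `transponder_delay_s`.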
In the event of approximately simultaneous detection of several response signals, in order to obtain unambiguous detection it may be expedient to use special detection techniques, e.g. based on correlation between detected, coded response pulse sequences and prestored code sequences for each individual transponder. Another possibility is to use polling signals on different frequencies and corresponding frequency-tuned transponders. Finally, transponders may be employed with different delays, with the result that no transponders which are located in the area in question will produce return signals which collide in time.
Figure 5 illustrates an alternative method for determining the position of a natural object, where figure 5a illustrates a camera K₁ which is filming a natural object 10, while figure 5b illustrates a video image I filmed by this camera. By means of this alternative, a direction vector RV is determined from a camera K₁ to the natural object 10. In contrast to the object vector found in the embodiment of the invention described above, this direction vector will only have unit length (the vector's length therefore does not express the distance to the object). In the same way as described with reference to figure 3, the camera K₁ will be equipped with sensors in the camera head 17 which detect with very high accuracy the direction in which the camera is pointing. This direction will indicate the camera vector KV. In the camera the degree of zoom employed is also registered. By means of image processing techniques, such as pattern recognition, the object is identified whose position is to be determined, and a position is established for this in the display surface. Based on how powerful a zoom is being employed and the direction of the camera vector KV, it is possible to determine a direction for the zoom vector ZV which goes from the camera's position through the display surface's outer edge where it is intersected by a straight line from the picture surface's centre through the location of the natural object in the display surface I. The direction vector RV will thereby also be determined. This direction vector will define a straight line through the camera's position and the position of the natural object. The position of the natural object will thus not be unambiguously defined, but it will always be possible to calculate the position of this object in any picture filmed by the same camera at the same position, as long as camera vector and zoom use (coverage angle) are known for this camera.
The position can be found by finding the zoom vector ZV for the camera in the same way, with the result that this vector remains located in the same plane as the direction vector RV and the camera vector KV. If the direction vector RV is located between the zoom vector ZV and the camera vector KV, it will be possible to find the position in the picture by means of the angles between the respective vectors.
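A hypothetical sketch of deriving such a direction vector from the object's position in the display surface, the registered camera direction (panning and tilt) and the coverage angles might look as follows (a pinhole camera model is assumed; all names and conventions are illustrative, not taken from the disclosure):

```python
import math

def direction_vector(pan_deg, tilt_deg, h_fov_deg, v_fov_deg, u, v):
    """Unit direction vector from the camera towards an object seen at
    normalized picture coordinates (u, v) in [-1, 1], where (0, 0) is the
    picture centre and (1, 1) the corner of the outer edge.
    Pixel offsets scale with the tangent of half the coverage angles."""
    dx = u * math.tan(math.radians(h_fov_deg) / 2)
    dy = v * math.tan(math.radians(v_fov_deg) / 2)
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    # Camera-frame ray (z is the optical axis, i.e. the camera vector KV)
    x, y, z = dx, dy, 1.0
    # Tilt: rotation about the x-axis
    y, z = y * math.cos(tilt) - z * math.sin(tilt), y * math.sin(tilt) + z * math.cos(tilt)
    # Pan: rotation about the vertical y-axis
    x, z = x * math.cos(pan) + z * math.sin(pan), -x * math.sin(pan) + z * math.cos(pan)
    n = math.sqrt(x * x + y * y + z * z)
    return x / n, y / n, z / n  # unit length, as RV requires
```

With pan = tilt = 0 and the object in the picture centre, the ray coincides with the camera vector, as expected.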
If two or more cameras are used to film the same object simultaneously, it should be possible correspondingly to find two or more direction vectors, as illustrated in figure 5c. Provided that measurements and calculations are performed with satisfactory accuracy, these vectors will define lines which intersect each other in the position of the natural object. Based on the positions of the respective cameras, it then becomes possible to register the natural object's position absolutely in a coordinate system. It will of course be possible to supplement these data with registered data for focusing of the respective cameras. A focusing registration of this kind, however, will be an extremely imprecise form of distance measurement.
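In practice the two direction lines will rarely intersect exactly because of measurement noise; the usual least-squares estimate of the natural object's position is then the midpoint of the shortest segment between the two rays, which might be sketched as:

```python
def triangulate(o1, d1, o2, d2):
    """Estimated intersection of two rays o + t*d (camera positions o1, o2
    and direction vectors d1, d2): the midpoint of the shortest segment
    between the two lines, obtained by minimising the squared distance."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    w = tuple(x - y for x, y in zip(o1, o2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    den = a * c - b * b  # zero only if the rays are parallel
    t1 = (b * e - c * d) / den
    t2 = (a * e - b * d) / den
    p1 = tuple(o + t1 * di for o, di in zip(o1, d1))  # closest point on ray 1
    p2 = tuple(o + t2 * di for o, di in zip(o2, d2))  # closest point on ray 2
    return tuple((x + y) / 2 for x, y in zip(p1, p2))
```

The length of the segment between `p1` and `p2` also gives a direct measure of how accurately the two cameras' measurements agree.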
Independently of which method is used for registering the positions of the objects concerned, if only one camera is employed and no distance detectors, according to the invention these positions or directions will be stored in a register together with a time reference. These stored data will represent the positions of the respective objects, either in the form of coordinates in a coordinate system, or in the form of directions from a defined position, preferably the position of a camera. Referring again to figure 1 , these positions will preferably be stored in a system 13 which comprises at least a memory for data representing these positions, and also a computing unit for performing the calculations described in this description together with video production equipment. By means of this equipment it will be possible to generate a synthetic picture 14 which can be mixed with or displayed alternately with a regular video picture, thus forming a composite video picture or video sequence 15. Various possible synthetic pictures and arrangements of this kind will be described below, with reference to figures 6a - 6e.
By means of the registered positions, it will be possible, for example, to settle special doubtful situations in connection with sport and athletics, such as offside decisions in football, clarification of controversial cases in connection with goal scoring and the like. A system designed according to the invention will thereby be capable of solving such tasks in a manner which per se is previously known. Furthermore, the system will comprise equipment which enables a synthetic representation of natural objects to be generated, based on the detected position of this object. In this context the term natural object will be employed to indicate an actual physical object or the display of such an object in a video sequence, while the term synthetic object will indicate a generated symbol indicating the position of a natural object when the natural object itself is not shown in the video sequence or is only shown in a manner which is difficult for an observer to detect.
The detected position of an object can be used for generating a synthetic object in a video sequence in real time, i.e. for indicating the natural object's position when it cannot be easily seen on a video or television screen. It may, for example, involve a skier who is hidden behind a cluster of trees, an Alpine skier whose position requires to be shown in a general view of an entire downhill run, or an ice hockey puck or golf ball which on account of its size and speed is difficult to follow with the eyes. Such a utilisation of the system according to the present invention will correspond to that which is stated in the applicant's previous Norwegian patent no. 303.310.
In addition, the present invention offers a number of new possibilities which are achieved by means of the stored data for the positions of the natural objects. By means thereof, for example, the position of an object at a given time can be indicated in a video sequence which was filmed at another time, as illustrated in figure 6a. This may be employed, for example, by inserting in a television picture, which shows an athlete 30 as he/she is passing an intermediate station, a synthetic object 31 showing the position of a second athlete who passed the same intermediate station at an earlier time. The synthetic object's 31 position in the picture surface is calculated as indicated above. It will then be natural to show the position of this second athlete when he/she had been in action for the same amount of time as the athlete who is in the process of passing the intermediate station. In other words, this means that if the two athletes are skiers who have started at exactly five minute intervals, it will be the position of the skier who started first as it was five minutes ago, which is shown in the form of a synthetic object 31 in the television picture. The synthetic object's position in the display surface is calculated, for example, as indicated in the description of figures 2a and 2b.
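The lookup of the stored position corresponding to the same elapsed race time might be sketched as follows (illustrative names; the stored data are assumed to be a list of (time, position) samples sorted by time):

```python
def comparison_position(stored, start_earlier, start_current, t_now):
    """Stored position of the earlier starter at the same elapsed race time
    as the current competitor has now. With a five-minute start interval,
    this returns where the earlier skier was exactly five minutes ago,
    as in the example in the text."""
    elapsed = t_now - start_current    # race time of the current athlete
    target = start_earlier + elapsed   # same race time for the earlier starter
    # nearest stored sample in time (interpolation could refine this)
    return min(stored, key=lambda s: abs(s[0] - target))[1]
```

The returned position is then projected into the display surface with the vector calculation of figures 2a and 2b to place the synthetic object 31.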
In the same way it will be possible to show the position of a skater from a previous pair as this position was after exactly the same length of time in the course of the race as for the pair in question who are in action. The public will thereby be able to obtain a direct and visual comparison of how the race is developing for one skater compared to that of a skater who has already completed his race. If the position which is to be indicated by means of the synthetic object is located outside the display surface, it will be possible, as illustrated in figure 6b, to indicate the position by letting the synthetic object be an arrow 32 or other kind of cursor pointing out of the video picture towards the position concerned. The direction of this arrow is found by finding the straight line between the centre of the picture and the object's position in the picture plane, as indicated above under the description of figures 2c and 2d.
The actual synthetic object may assume any form whatever. The synthetic object will preferably not be a close copy of the natural object it represents, since this may make it difficult for viewers to distinguish between the representation of natural and synthetic objects on the television screen. On the contrary, in order to make the synthetic object easier to see, it will be desirable to employ known per se picture processing technology to analyse the colours in the part of the picture where the synthetic object is to be placed in order to ensure that the synthetic object has a colour which is in sharp contrast to the background.
We refer now to figure 6c. By means of the stored positions for the object represented by the synthetic object and the real-time registered position of the natural object 30 which is shown in the video sequence concerned and which is compared with the synthetic object 33, it will be possible to calculate the distance between these two positions. This distance may either be calculated as the straight line between the two points in the defined coordinate system representing the positions in question, or the distance can be calculated along a defined track between these points. This track will, for example, correspond to the course a skier must follow in order to cover the distance between the two positions. It will then be possible to display the calculated distance together with the synthetic object 33, either as part of the synthetic object, or in a separate area on the video or television screen where the video sequence concerned is shown. Where there is a well-defined and meaningful connection between position and time, as will be the case in various kinds of races such as ski races, Alpine events and speed skating, but not for events such as ski jumping, pole vaulting and ball games, it will also be possible to calculate a distance in time between the natural and the synthetic object. This distance can be defined in a number of ways. A result of having an absolute or relative time defined for each stored position is that an aggregate time can be calculated for a competitor for each such stored position. An alternative is therefore to calculate the difference between the individual aggregate time for the two objects at approximately the same position.
In other words, the difference is calculated between the current value of the time for the competitor concerned (the natural object's aggregate time for the position it is in in real time) and the corresponding aggregate time for the competitor represented by the synthetic object when it was in approximately the same position. In the same way as for the distance in space, it will be possible to display the distance in time together with the synthetic object.
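The calculation of such a time difference at approximately the same position might be sketched as follows (illustrative; each competitor's stored data are assumed reduced to (distance-along-course, aggregate-time) samples sorted by distance):

```python
from bisect import bisect_left

def time_gap_at_position(track_a, track_b, s):
    """Difference in aggregate time between two competitors at approximately
    the same position s along the course. Each track is a sorted list of
    (distance_along_course, aggregate_time) samples; positive result means
    competitor A needed more time to reach s than competitor B."""
    def time_at(track, s):
        dists = [d for d, _ in track]
        i = min(bisect_left(dists, s), len(track) - 1)
        return track[i][1]   # aggregate time at the nearest stored sample
    return time_at(track_a, s) - time_at(track_b, s)
```

The same routine, evaluated for every stored position, yields the data behind the curves of figure 6e.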
If the position which is to be displayed in the form of a synthetic object is located outside the display surface for the video sequence concerned, it is even more appropriate to illustrate the distance between the two positions in the form of an indication of time and/or distance. Furthermore, it is possible to generate a graphic illustration of this distance, for example in the form of a line or a column. By this means it is also possible to generate a comparison of more than two objects, e.g. a plurality of competitors in a race. A possible example of such a comparison is illustrated in figure 6d, where on the basis of the position of a competitor 30 a bar chart 34 is generated illustrating the distance to other competitors. The distance may be defined as distance in time when they were in the same position, or the distance in space when they had a corresponding time. The illustrated distance may also be the actual distance at the relevant time back to the other competitors if real time positions exist for them, or possibly the distance back in time to when the competitor concerned, if he/she is leading the race, was in the respective positions in which the other competitors are at the moment.
Figure 6e illustrates an example of a graphic representation 35 of a development over time, for example of a long distance race in track and field or a cross-country skiing race, generated by means of the present invention. The distance between two or more competitors can be calculated as described above for a desired number of positions or times in the race, in principle for each individual stored position, and this may be illustrated, for example in the form of curves. These curves may be normalised with regard to one competitor or a pre-issued schedule represented by a straight line, and the competitors' distance to this competitor or this schedule will be illustrated as curves located above or below this straight line. The example in figure 6e illustrates time differences at various positions in the race, but it will of course also be possible to show distance differences at various times during the race. Other information which can be calculated by means of the stored relationships between position and time will include speed, average speed over a given interval, change in own speed at different periods, for example early and late in a race, etc. In principle, the possibilities of compiling statistics are limited only by the producer's wishes and imagination. It will be possible to calculate all of these statistics and values very rapidly since the software necessary for performing the calculations will already exist in the production equipment. It will therefore be possible to display the results of such calculations while the sports arrangement is in progress, in principle in real time.
In conclusion, it is of course also possible to generate completely synthetic video sequences which are only based on the positions which are stored for the respective natural objects. This could be suitable after the conclusion of a race for showing the development of the race as it would appear if all the competitors started at the same time and not at given time intervals, or it may be desirable to generate synthetic pictures of how a situation would have appeared from a position where there is no camera. This is something which is often desirable, for example, in connection with analysing the scoring of a goal in football.

Claims

1. A method in connection with a video production for registering sequences of data representing positions for one or more natural objects, characterized in that for each such object it comprises the steps of:
- detecting from at least one basic position the direction towards the natural object, or from at least two basic positions detecting the distance to the natural object, at a set of points in time within the defined period, where the basic position or basic positions represent known positions in a selected coordinate system,
- if the distance or the direction is detected from two or more basic positions, calculating a position in the selected coordinate system for each time in the set of times, or, if the direction is detected from only one basic position, calculating a direction vector which represents the position by indicating the direction from said basic position towards the natural object for each point in time in the set of points in time, and
- storing the calculated positions or said calculated directions for the natural object as sequences of data representing said positions or directions at each point in time in the set of points in time.
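The two-basic-position branch of claim 1 amounts to triangulating the object from two sighting rays. A minimal sketch, assuming the directions are given as vectors in the selected coordinate system (the least-squares midpoint construction below is one possible choice, not dictated by the claim):

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Object position from two basic positions p1, p2 and sighting
    directions d1, d2: the midpoint of the shortest segment between
    the rays p1 + s*d1 and p2 + t*d2 (least-squares s and t)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    a = np.stack([d1, -d2], axis=1)          # 3x2 system matrix
    (s, t), *_ = np.linalg.lstsq(a, p2 - p1, rcond=None)
    return (p1 + s * d1 + p2 + t * d2) / 2.0
```

With noise-free data the two rays intersect and the midpoint is the exact position; with measurement noise it is a reasonable compromise between them.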
2. A method according to claim 1, characterized in that only one basic position is employed, that as a detector a video camera (K1) is used which for each point in time in the set of points in time registers a video image and which is equipped with devices (18, 19) for registering the camera's direction (panning and tilt) and use of zoom for each registered video image, that based on a registered camera direction linked to a registered video image, a camera vector (KV) is determined which defines the camera's optical axis, that a position in the display surface is determined for the object (10) to which a direction is to be determined, that a point of intersection is determined between the picture's outer edge and the straight line from the centre of the picture through the object's (10) position in the display surface, that based on this point of intersection and the registered use of zoom, a zoom vector (ZV) is determined, and that based on the respective distances between the centre of the display surface, the object's (10) position in the display surface and said point of intersection in the picture's outer edge, a direction vector (RV) is defined which is located between the camera vector and the zoom vector and which defines the direction from the basic position to the object (10).
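The construction of claim 2 can be approximated with a pinhole-camera model. The sketch below assumes particular pan/tilt axis conventions and normalised picture coordinates (all names and conventions are illustrative, not taken from the claim):

```python
import math
import numpy as np

def direction_to_object(pan, tilt, half_angle_h, half_angle_v, px, py):
    """World direction vector towards an object seen at normalised
    picture position (px right, py down, +/-1 at the frame edge),
    given registered pan, tilt and zoom (half view angles)."""
    # Ray in camera coordinates: x forward, y left, z up (right-handed).
    ray = np.array([1.0,
                    -px * math.tan(half_angle_h),
                    -py * math.tan(half_angle_v)])
    ray /= np.linalg.norm(ray)
    # Rotate by tilt (elevation about the y axis), then pan (about z).
    ct, st = math.cos(tilt), math.sin(tilt)
    r_tilt = np.array([[ct, 0.0, -st], [0.0, 1.0, 0.0], [st, 0.0, ct]])
    cp, sp = math.cos(pan), math.sin(pan)
    r_pan = np.array([[cp, -sp, 0.0], [sp, cp, 0.0], [0.0, 0.0, 1.0]])
    return r_pan @ r_tilt @ ray
```

An object at the picture centre yields the camera vector itself; an object at the frame edge yields a ray tilted by the full half view angle, which matches the role of the zoom vector in the claim.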
3. A method according to claim 1, characterized in that at least two basic positions are employed, that as position detectors video cameras (K1, K2) are used, which register a video image for each time in the set of times and which are equipped with devices (18, 19) for registering the camera's direction (panning and tilt) and use of zoom for each registered video image, that based on the registered camera directions linked to simultaneously registered video images, respective camera vectors (KV1, KV2) are determined which define the cameras' optical axes, that a position in the respective display surface is determined for the object (10) to which a direction is to be determined from each basic position, that for each registered picture a point of intersection is determined between the picture's outer edge and the straight line from the centre of the picture through the object's position in the display surface, that based on these points of intersection and the registered use of zoom in the respective cameras, a zoom vector (ZV1, ZV2) is determined for each of the cameras (K1, K2), that based on the respective distances between the centre of the display surface, the object's position in the display surface and said point of intersection in the picture's outer edge, for each camera (K1, K2) a direction vector (RV1, RV2) is determined which is located between the camera vector (KV1, KV2) and the zoom vector (ZV1, ZV2) and which defines the direction from the respective basic positions to the object (10), and that based on the respective direction vectors (RV1, RV2) and basic positions, a position is calculated for the object (10) in the preselected coordinate system.
4. A method according to claim 1, characterized in that at least two basic positions are employed, that as position detectors there are used devices (1, 2, 3) for transmitting and receiving electromagnetic signals, that at each natural object (10) whose position is to be detected, a transponder is provided, that by means of a registered time lapse from the time a polling signal is transmitted from a position detector until a reflected response signal from said transponder is received at the same position detector, a distance is calculated from the position detector to the object, that by means of such calculated distances from each of the position detectors to the object, for each point in time in the set of points in time, a position is determined for the object in the selected coordinate system.
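The computation in claim 4 is essentially time-of-flight ranging followed by trilateration. A hedged sketch, assuming radio propagation at the speed of light and ignoring the transponder's internal delay (both simplifications are this sketch's assumptions):

```python
import numpy as np

C = 299_792_458.0  # assumed propagation speed of the polling signal, m/s

def tof_distance(round_trip_seconds):
    """One-way distance: half the registered poll-to-response time
    lapse, multiplied by the propagation speed."""
    return C * round_trip_seconds / 2.0

def trilaterate(anchors, distances):
    """Least-squares position from known detector positions and the
    distances above; subtracting the first range equation
    |x - a_i|^2 = d_i^2 from the others removes the quadratic terms."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    a = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    position, *_ = np.linalg.lstsq(a, b, rcond=None)
    return position
```

Three non-collinear detectors suffice for a 2-D position; a fourth (non-coplanar) detector extends the same computation to 3-D.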
5. A method according to claim 4, characterized in that SAW chips (20) are employed as transponders, each such chip being designed to transmit a unique response signal which enables the natural object to be identified.
6. A method for, based on a collection of stored data representing either calculated positions for one or more natural objects in a selected coordinate system or calculated directions from a known camera position to the position of one or more natural objects, generating a synthetic or composite video sequence comprising a synthetic representation of the position of one or more natural objects or of values which are calculated by means of said stored data for one or more natural objects, characterized in that for each such object it comprises the steps of:
- selecting a sequence of data from the collection of stored data, where the selected sequence represents calculated positions or directions for said object at defined times,
- converting each of the calculated positions into values which can be represented in a video sequence, and
- subsequently generating a synthetic or composite video sequence where the calculated values are represented graphically or alphanumerically.
7. A method according to claim 6, characterized in that if the collection of stored data represents calculated positions, for each position in the selected sequences of data corresponding directions are calculated from the position of a selected video camera to the respective calculated positions, or if the collection of stored data represents calculated directions, the data are selected which represent directions which are calculated from the camera position of a selected video camera; that for each natural object a sequence of data is selected which represents a time interval corresponding to the time interval for a video sequence filmed by said camera; that said conversion of the calculated or selected directions is a conversion to positions in the picture plane by means of calculations which take into consideration registered camera directions (panning and tilt) and zoom angles related to the respective video images in said video sequence; that for calculated positions in the picture plane which are located within the display surface, synthetic objects are generated representing these positions, that for calculated positions in the picture plane which are located outside the display surface, synthetic objects are generated indicating the direction from the centre of the display surface towards the calculated position, and that a composite video sequence is thereby generated.
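The conversion in claim 7 from a direction to a position in the picture plane can be sketched as a perspective projection. The axis conventions, the normalised picture coordinates and the function name below are assumptions for illustration:

```python
import math
import numpy as np

def project_to_picture(direction, kv, up, half_angle_h, half_angle_v):
    """Normalised picture coordinates (px right, py up, +/-1 at the
    frame edge) for a direction from the camera towards an object;
    kv is the unit camera vector (optical axis), up a unit vector
    perpendicular to it, half view angles follow from the use of zoom.
    Returns (px, py, on_screen); off-screen values keep their sign so
    an arrow-style synthetic object can be drawn towards the edge."""
    right = np.cross(kv, up)
    depth = float(np.dot(direction, kv))   # component along optical axis
    if depth <= 0.0:
        return None                        # object is behind the camera
    px = float(np.dot(direction, right)) / (depth * math.tan(half_angle_h))
    py = float(np.dot(direction, up)) / (depth * math.tan(half_angle_v))
    on_screen = abs(px) <= 1.0 and abs(py) <= 1.0
    return px, py, on_screen
```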
8. A method according to claim 7, characterized in that the respective positions and directions are indicated in relation to a selected coordinate system, that the respective directions representing the direction from a camera position to the position of a natural object are expressed as object vectors (OV), that the respective camera directions are expressed as camera vectors (KV) and that the respective zoom angles are expressed by means of zoom vectors (ZV), where the zoom vector (ZV) is defined as a vector which is located in the same plane as the object vector (OV) and the camera vector (KV) and which passes through a point which is located on the video picture's (I) outer edge (fig. 2).
9. A method according to claim 6, characterized in that the collection of stored data represents positions, that for each natural object a sequence of data is selected representing a time interval corresponding to the time interval for a video sequence, that for each picture in the video sequence and for each of the natural objects a value is calculated representing the distance between said natural objects and a corresponding calculated position for a corresponding natural object shown in said video sequence, either as the rectilinear distance between the respective positions, as the distance along a defined track, or as a time difference which is derived from time information related to the respective positions, and that from said video sequence a composite video sequence is generated where the respective distances are shown as graphic or alphanumeric symbols.
10. A method according to claim 6, characterized in that the collection of stored data represents positions, that one of the selected stored sequences or a corresponding predefined sequence of data is selected as reference, that each of the remaining selected sequences of data is converted to sequences of data representing differences relative to the reference sequence, where the differences are differences in time if the sequences correspond in space and the differences are differences in distance if the sequences correspond in time, and a synthetic video sequence is then generated which represents the calculated differences graphically or alphanumerically.
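The reference-sequence normalisation of claim 10 can be sketched as follows, assuming (time, distance) samples and distance differences at common sample times (the sample format and the linear interpolation are illustrative choices, not dictated by the claim):

```python
def differences_to_reference(reference, others, times):
    """Convert competitor sequences into distance differences relative
    to a reference sequence at common sample times -- the values that
    would be plotted as curves about a straight baseline."""
    def dist_at(samples, t):
        for (t0, d0), (t1, d1) in zip(samples, samples[1:]):
            if t0 <= t <= t1:
                return d0 + (d1 - d0) * (t - t0) / (t1 - t0)
        raise ValueError("time outside sampled range")
    return [[dist_at(o, t) - dist_at(reference, t) for t in times]
            for o in others]
```

For sequences that correspond in space rather than in time, the same structure applies with the roles of time and distance exchanged.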
11. A method according to claim 6, characterized in that the collection of registered data represents positions, that corresponding sequences are selected for a set of natural objects, that the calculated positions are converted into coordinates in a selected coordinate system, that a composite sequence of the selected positions is generated, and that based on this sequence a synthetic video sequence is generated where all the positions are represented graphically, viewed from any point of reference in the selected coordinate system.
12. A system in connection with a video production for registering sequences of data representing positions for one or more natural objects, characterized in that it comprises
- at least one detector designed to be able to detect the direction from the respective detector towards a natural object or at least two detectors for detecting the distance from the position of the respective detector to a natural object,
- equipment for registering the time of each registration of direction or position,
- equipment for calculating on the basis of data from the detectors a position for said object in a selected coordinate system or a vector in a selected coordinate system representing the direction from a basic position towards the object,
- a memory for storing calculated positions or calculated directions for natural objects together with points in time of registering the data on the basis of which the positions or directions are calculated, and
- communication connections between the respective units in the system.
13. A system according to claim 12, characterized in that each detector is a video camera (K1, K2) which is provided with devices (18, 19) for registering the camera's direction (panning and tilt) and use of zoom for each registered video image.
14. A system according to claim 12, characterized in that each detector is a device (1, 2, 3) for transmitting and receiving electromagnetic signals, and that the system further comprises a transponder provided on each natural object (10) whose position is to be detected.
15. A system according to claim 14, characterized in that said transponders are SAW chips (20), where each such chip is designed to transmit a unique response signal which enables the natural object to be identified.
16. A system for, based on a collection of stored data representing either calculated positions for one or more natural objects in a selected coordinate system or calculated directions from a known camera position to the position of one or more natural objects, generating a synthetic or composite video sequence comprising a synthetic representation of the position of one or more natural objects or of values which are calculated by means of said stored data for one or more natural objects, characterized in that it comprises
- a memory where said data are stored,
- a computing unit connected to the memory and designed to be able to, on the basis of data retrieved from the memory, calculate values which can be represented in a video sequence, and
- video production equipment connected to the calculating unit and designed to be able to generate a graphic or alphanumeric representation of the values received from the calculating unit.
17. A system according to claim 16, characterized in that the calculating unit comprises means for, on the basis of time information which is stored together with said data, selecting sequences of data from said memory representing a desired time interval; means for, on the basis of data representing positions in a coordinate system, being able to calculate a direction from a reference point in the coordinate system to the positions which are represented by said data, or means for selecting from among the data stored in the memory, data representing directions which radiate from a selected reference point; and means for calculating the coordinates of a point of intersection between a line which passes through the reference point in said direction and a selected plane which is located perpendicularly to a reference direction which also passes through said reference point.
18. A system according to claim 16, characterized in that the video production equipment comprises means for receiving from the computing unit data representing coordinates in a picture plane, means for generating a synthetic object in a video picture in a position in the picture corresponding to the received coordinates or, if said data represent a position which is located outside the frames of a video picture, for generating a synthetic object which indicates the direction from the centre of the picture towards said position, and means for, if so desired, combining the generated video image with a natural video image in order to form a composite picture.
19. A system according to claim 16, characterized in that the data which are stored in the memory represent positions with associated time reference, and that the calculating unit comprises means for selecting from the memory sequences of data representing a desired time interval, means for calculating differences between data from different sequences of selected data or differences between data from the selected sequences and data which the computing unit is designed to receive from a system for registering such data, where the difference is calculated as a difference in position for data which correspond in time or as a difference in time for data which correspond in position.
20. A system according to claim 16, characterized in that the video production equipment comprises means for receiving from the calculating unit data representing differences between different positions or different times and for generating a graphic or alphanumeric representation of these differences in a video image.
PCT/NO2000/000420 1999-12-10 2000-12-07 Indicating positional information in a video sequence WO2001043427A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU17439/01A AU1743901A (en) 1999-12-10 2000-12-07 Indicating positional information in a video sequence

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
NO19996140 1999-12-10
NO19996140A NO996140L (en) 1999-12-10 1999-12-10 Method and system for capturing locations and generating video images where these positions are represented

Publications (1)

Publication Number Publication Date
WO2001043427A1 (en) 2001-06-14

Family

ID=19904094

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NO2000/000420 WO2001043427A1 (en) 1999-12-10 2000-12-07 Indicating positional information in a video sequence

Country Status (3)

Country Link
AU (1) AU1743901A (en)
NO (1) NO996140L (en)
WO (1) WO2001043427A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007072115A1 (en) * 2005-12-21 2007-06-28 Andrea Lupini The control system of the determination of an offside position in the game of soccer and the position of moving objects in sports

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997002699A1 (en) * 1995-06-30 1997-01-23 Fox Sports Productions, Inc. A system for enhancing the television presentation of an object at a sporting event
WO2000031560A2 (en) * 1998-11-20 2000-06-02 Aman James A Multiple object tracking system


Also Published As

Publication number Publication date
AU1743901A (en) 2001-06-18
NO996140L (en) 2001-06-11
NO996140D0 (en) 1999-12-10

Similar Documents

Publication Publication Date Title
EP0835584B1 (en) A system for enhancing the television presentation of an object at a sporting event
US6154250A (en) System for enhancing the television presentation of an object at a sporting event
EP0894400B1 (en) Method and system for manipulation of objects in a television picture
US6707487B1 (en) Method for representing real-time motion
US5953077A (en) System for displaying an object that is not visible to a camera
EP1010129B1 (en) Re-registering a sensor during live recording of an event
US8675021B2 (en) Coordination and combination of video sequences with spatial and temporal normalization
US6304665B1 (en) System for determining the end of a path for a moving object
US6380933B1 (en) Graphical video system
US6567116B1 (en) Multiple object tracking system
US20080068463A1 (en) system and method for graphically enhancing the visibility of an object/person in broadcasting
WO1998032094A9 (en) A system for re-registering a sensor during a live event
US6824480B2 (en) Method and apparatus for location of objects, and application to real time display of the position of players, equipment and officials during a sporting event
KR20010008367A (en) Pitching practice apparatus, pitching analysis method with the same, and method of performing on-line/off-line based baseball game by using pitching information from the same
US20050159252A1 (en) System providing location information in a sports game
WO2001043427A1 (en) Indicating positional information in a video sequence
Li et al. Real-Time Ski Jumping Trajectory Reconstruction and Motion Analysis Using the Integration of UWB and IMU
CA2559783A1 (en) A system and method for graphically enhancing the visibility of an object/person in broadcasting
EP1333893A2 (en) A method of determining pressure index
MXPA01005100A (en) Multiple object tracking system
KR20020008382A (en) Reappearance system for the track of the pitched ball

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP