
WO2006030444A2 - Imaging based identification and positioning system


Info

Publication number
WO2006030444A2
Authority
WO
WIPO (PCT)
Prior art keywords
camera
tag
tracking
volume
identifying
Application number
PCT/IL2005/000998
Other languages
French (fr)
Other versions
WO2006030444A3 (en)
Inventor
Amit Stekel
Original Assignee
Raycode Ltd.
Application filed by Raycode Ltd.
Publication of WO2006030444A2
Publication of WO2006030444A3

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 3/00 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S 3/78 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
    • G01S 3/782 Systems for determining direction or deviation from predetermined direction
    • G01S 3/785 Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system
    • G01S 3/786 Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system, the desired condition being maintained automatically
    • G01S 3/7864 T.V. type tracking systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S 5/16 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves

Definitions

  • the present invention relates to the field of Imaging based Identification and Positioning Systems, especially for use indoors, in determining the identity and position of objects by means of an imaging system and an optical identity tag carried on the object.
  • IPS: Indoor Positioning System
  • Radio wave based methods, in particular those that use active tags, generally excel in their area coverage capabilities and their non-line-of-sight characteristics. On the other hand, in the vicinity of metals and water, their performance may be degraded by interference noise. Furthermore, their positioning accuracy is variable, mostly of the order of 5 meters, and there are only a few vendors that supply systems that give close to one-meter accuracy. In addition, some RF-based vendors, particularly those who base their systems on Bluetooth, do not provide identification functionality, and others may have varying precision of identification.
  • Ultrasonic methods are mostly based on time-of-flight measurements, the difference from radio methods being that the radiation velocity is a million times slower and thus the time-of-flight is much longer, measured in milliseconds instead of nanoseconds. This leads to much higher position accuracy, typically of a few centimeters.
  • the area covered may be limited, however, usually to the size of a room.
  • Active technologies, such as RF-based, RFID, IR and ultrasound, may have maintenance cost problems with battery operation, as in some cases their continuous operation necessitates frequent battery replacement.
  • optical systems, like scene analysis and IR, excel in position accuracy, continuous operation and the low cost of tags and readers. Their identification precision is good (particularly in scene analysis), but there are vendors using IR based approaches that do not have identification capabilities.
  • CCTV control systems also relate to the field of imaging based tracking. These systems are designed to help human operators to track specific activity, be it personnel, customers, intruders etc. These systems have evolved over the years, adopting computer vision techniques in order to enhance the overall system performance and quality, and to save human labor. Yet the current practice of these systems is generally limited to low level image understanding such as "video motion detection" or VMD, designed to help human operators to focus on the most important events.
  • VMD: video motion detection
  • the present invention attempts to overcome the difficulties associated with prior art systems, as outlined in the background section, by providing a novel method and apparatus for identifying and tracking objects such as people, vehicles, carts etc., within closed spaces.
  • the system may preferably comprise an identifying tag affixed to an object, and apparatus and techniques for automatically reading the tag information and the position vector and its derivatives, e.g. the tag velocity vector and acceleration.
  • the system preferably comprises separate identification and tracking units and an optical tag unit.
  • the system generally comprises imaging devices and optional light sources, coaxially disposed with the imaging devices, and also preferably, a retroreflective tag attached to the moving object to be identified and tracked.
  • the system differs from the prior art systems described above, in that it is based on a passive tag, yet it offers remote identification and positioning capabilities. This ensures a cost effective solution that is reliable, highly accurate and remotely operated.
  • a system with both high identification performance and large area of coverage by using two sets of cameras; one set optimized for the tracking function using a large field of view and comparatively low identification resolution, and the second set optimized for tag identification using comparatively high resolution and a small field of view.
  • the tracking camera or cameras are disposed throughout the area where tracking of the objects is to be performed, while the identification cameras, also known as readers, are positioned to identify tags in restricted regions of the total space where the tag regularly passes, such as around doors, in corridors, etc. Once the identification has been performed, the tracking camera keeps track of the tag position.
  • a system that can be used in poor lighting conditions, utilizing a retroreflective tag that, together with active illumination with monochromatic light and a suitably filtered imaging device, can suppress spurious light sources and enhance the tag's reflected light.
  • the present invention provides for a method to correct for the optical distortion of the camera lens utilizing direct measurement of the camera optical distortion.
  • the present invention provides for three-dimensional measurement of the tag position using the tag distance from the camera as a third measurement, in addition to the two local coordinates measured in the camera image.
  • the tag distance is measured using its features such as size or brightness and a priori calibration data of these features in relation to the tag distance.
  • An alternative option to measure the tag distance to the camera is by using a two-camera simultaneous position measurement of the tag, and triangulation techniques, as known in the art.
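By way of illustration, here is a minimal Python sketch of the distance-from-features approach just described, assuming a pinhole camera model, a tag of known physical diameter, and an a priori brightness-versus-range calibration table; all function names and numeric values are hypothetical, not taken from the patent.

```python
import numpy as np

def distance_from_size(apparent_diameter_px, true_diameter_m, focal_length_px):
    """Pinhole model: apparent size = focal_length * true size / distance."""
    return focal_length_px * true_diameter_m / apparent_diameter_px

# A priori calibration of tag brightness versus range (illustrative values):
# mean tag intensity measured at known distances during installation.
CAL_BRIGHTNESS = np.array([30.0, 90.0, 180.0, 240.0])  # increasing, for np.interp
CAL_DISTANCE_M = np.array([8.0, 4.0, 2.0, 1.0])

def distance_from_brightness(brightness):
    """Interpolate the calibration table (brightness falls with distance)."""
    return float(np.interp(brightness, CAL_BRIGHTNESS, CAL_DISTANCE_M))

# Example: a 10 cm tag imaged 25 px wide by a lens with f = 1200 px -> 4.8 m.
print(distance_from_size(25.0, 0.10, 1200.0))
print(distance_from_brightness(90.0))  # -> 4.0 m
```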
  • the present invention provides a maintenance free and low-cost optical tag that uses retroreflective means to reflect and modulate light originating at the reader, back to the reader's imaging device, without the need for an internal source of energy on the tag or object.
  • the present invention allows for simultaneous identification and position vector measurements of multiple tagged moving objects using tag enhanced features identification and tracking algorithms as will become apparent from the detailed description of the system.
  • there is provided covert operation using light in the infrared region. In addition, as the method is based on retro-reflected radiation, the tag can be detected only from the reader, and no light is scattered in other directions.
  • the present invention provides for scene understanding using the system's identification and positioning signal, carried through the video together with image understanding algorithms, as will become apparent from the detailed description of the system algorithm.
  • the present invention provides for zone surveillance using the system's identification and positioning signal, carried through the video together with image understanding algorithms as will become apparent from the detailed description of the system algorithm.
  • the present invention may be used to upgrade standard video networks, or CCTV installations, by offering additional software and hardware, such as video server, passive optical tags, coaxial illumination and identification cameras to identify and position tags coming from various local cameras into a global set of tracks described upon a common site map, usually indoors, so that the global picture of tracked objects can be grasped from the fragmented local images coming from the video network.
  • a network of separate cameras can collaborate to form a unified system for indoor identification and position determination of tagged objects.
  • the volume may be a zone under surveillance, and at least part of the volume is preferably located adjacent to an access opening to said zone, such as an entrance door, or in a busy part of said zone, such as in a corridor.
  • a method for tracking within a volume an object having identifying information comprising the steps of:
  • the identifying information may be a known feature of the object, or it may be coded within a tag. If coded in a tag, the tag may preferably comprise spatial information, in which case the resolution of the identifying cameras is spatial resolution, or it may preferably comprise chromatic information, in which case the resolution of the identifying cameras is chromatic resolution.
  • the above described methods may preferably also comprise the step of illuminating at least the part of the volume.
  • the tag is preferably such as to enhance its optical contrast against the background.
  • the illuminating is performed along the optical axis of the identifying camera, and the optical contrast is enhanced by use of a retroreflector which reflects illumination back essentially along the optical axis of the identifying camera.
  • the at least part of the volume may preferably be all of the volume, in which case the step of illuminating is also preferably performed along the optical axes of the at least one tracking camera, and the optical contrast is enhanced by use of a retroreflector which reflects the illumination back essentially along the optical axis of the at least one tracking camera.
  • the identification camera may preferably use an imaging aperture smaller than that of the at least one tracking camera, or an exposure time shorter than that of the at least one tracking camera
  • the illuminating may preferably be performed in the IR band.
  • the tag in any of the above described methods using a tag, may be a passive tag or an active tag.
  • the known feature of the object may preferably be the tag.
  • the volume is a zone under surveillance, and the at least part of the volume is located adjacent to an entrance to the zone.
  • a method as described above and wherein the object has a user associated therewith, the method also comprising the steps of (i) tracking the user by means of video scene analysis algorithms, such that the user can also be tracked when distant from the object, and (ii) tracking the user by tracking the object once the user becomes re-associated with the object.
  • a system for tracking within a volume an object having identifying information comprising:
  • At least one tracking camera viewing the volume, the at least one tracking camera having a first resolution sufficient to track the position of the object within the volume, (ii) a signal processor utilizing images of the object from the at least one tracking camera to track the position of the object in the volume, and
  • an identification camera viewing a selected part of the volume, the identification camera having a higher resolution than that of the at least one tracking camera, the identification camera identifying the information and determining the position of the object within the selected part of the volume, wherein the signal processor also correlates the position of the object within the part of the volume determined by the identification camera with its position determined by the at least one tracking camera, such that the at least one tracking camera also acquires the identifying information.
  • the identifying information may be a known feature of the object, or it may be coded within a tag. If coded in a tag, the tag may preferably comprise spatial information, in which case the resolution of the identifying cameras is spatial resolution, or it may preferably comprise chromatic information, in which case the resolution of the identifying cameras is chromatic resolution.
  • the above described system may preferably also comprise a source for illuminating at least the part of the volume. In such a case, the tag is preferably such as to enhance its optical contrast against the background.
  • the illuminating source is directed along the optical axis of the identifying camera, and the optical contrast is enhanced by use of a retroreflector which reflects illumination back essentially along the optical axis of the identifying camera.
  • the at least part of the volume may preferably be all of the volume, in which case the system also preferably comprises at least one more illuminating source directed along the optical axes of the at least one tracking camera, and the optical contrast is preferably enhanced by use of a retroreflector which reflects the illumination back essentially along the optical axis of the at least one tracking camera.
  • the identification camera may preferably use an imaging aperture smaller than that of the at least one tracking camera, or an exposure time shorter than that of the at least one tracking camera.
  • the source may preferably be an IR source.
  • the tag in any of the above described systems using a tag, may be a passive tag or an active tag. Furthermore, the tag may be the known feature of the object
  • the volume may be a zone under surveillance, and the at least part of the volume is preferably located adjacent to an entrance to the zone.
  • a system as described above and wherein the object has a user associated therewith, such that the system tracks the user when close to the object, the system also comprising video analysis algorithms, utilizing the at least one tracking camera and an identification camera, for tracking the user when distant from the object.
  • a method of determining the coordinates in three dimensions of the position in a volume of an object, an image of the object having a feature having characteristics which are dependent on the distance of the object from an imaging camera comprising the steps of:
  • the feature is preferably a known dimension of the object
  • the step of determining the distance of the object from the camera is preferably performed at least by comparing the measured size of the image of the known dimension with the true known dimension.
  • the feature is preferably the brightness of the object.
  • the step of determining the distance of the object from the camera is preferably performed at least by comparing the brightness with known brightnesses predetermined from images taken at different distances.
  • Fig. 1 A schematic illustration of an embodiment of the system of imagers, in accordance with a preferred embodiment of the present invention
  • Fig. 2 A schematic illustration of an embodiment of the tag reader and tag tracker, in accordance with a preferred embodiment of the present invention
  • Fig. 3 A schematic illustration of the operation of the tag, in accordance with a preferred embodiment of the present invention.
  • Fig. 4 A schematic illustration of the global camera calibration consisting of its global position and orientation in accordance with a preferred embodiment of the present invention
  • Fig. 5 A schematic illustration of the camera optics calibration including the direction angles corresponding to the imager local image positions, in accordance with a preferred embodiment of the present invention
  • Fig. 6 A schematic illustration of the tag imaging calibration in accordance with a preferred embodiment of the present invention.
  • Fig. 7 A schematic illustration of the real time global position measurement in accordance with a preferred embodiment of the present invention.
  • Fig. 8 A schematic illustration of an embodiment of the spatio-colored tag, in accordance with a preferred embodiment of the present invention.
  • Fig. 9 A schematic illustration of an embodiment of the spatio-colored tag, in accordance with an optional embodiment of the present invention.
  • Fig. 10 A schematic illustration of an embodiment of the infrared imager subsystem, in accordance with an optional embodiment of the present invention.
  • Fig. 1 shows a schematic layout of the system of the present invention comprising a set of imagers that optionally are mounted on the ceiling of an indoor space 26 that needs to be monitored.
  • the imagers have various imaging parameters and may preferably have light sources associated with them.
  • the system shown in Fig. 1 preferably comprises tracking imagers, known hereinafter as trackers, 10, 11, 12, 13, that image the entire monitored area, 26, through the tracked areas 20, 21, 22, 23 respectively, and identification imagers, known hereinafter as readers, 14 and 15, that image the areas 24 and 25 respectively, which are located near the entrance or exit openings to the space.
  • the readers have higher resolution than the trackers to enable them to identify the object to be tracked.
  • An object such as a person, a cart, or similar, having a tag, 40, is typically moving along the path, 41.
  • the tag signal is also detected in the tracker, 10, and although the limited resolution of the tracker makes it generally difficult to accurately read the identity of the tag, its identity is verified indirectly by coordinating the tag positions obtained separately from the reader camera 14 and the tracker camera 10, in a common coordinate system of the monitored site.
  • the tracker may be able to support the location identification by being able to recognize at least some features of the tag or object, such as its overall size, shape, color, or similar. The usefulness of this aspect of the tracker's properties will be described hereinbelow.
  • the tag is further tracked by a neighboring tracker, 13, as it passes into its field of view, 23.
  • Each tracker further transforms the local camera coordinates of the tag to the global site coordinates, thus allowing for coordination between all the trackers and reader or readers.
  • the above arrangement of the tracking system ensures both high identification performance and large tracking field coverage, by providing the readers and the trackers with separate imaging conditions, each set of cameras using a suitable set of parameters for its particular task, reading or tracking.
  • the reader camera provides definitive identification and can track tags or objects in their limited area coverage, allowing them to track the positions and thus transfer the data;
  • the trackers on the other hand, track the tags in their large area of coverage and preferably have some limited recognition capability to allow them to lock on the tracked tag more efficiently, as will be explained below.
  • the tag 40 position is tracked by means of a sequential series of images grabbed on all of the cameras, using its path features, 41, including at least its position, and preferably also some of the position derivatives, such as its velocity and direction, acceleration etc., and also preferably using at least some of its recognized image features, such as the tag size, color, length etc.
  • This data is accumulated to form a statistical description of the tag track.
  • the position-based information and its derivatives are used to estimate its future expected spatio-temporal dependent path, 42, and specifically, the next position 50 and region of interest, 51 that is expected at the time of the next grabbed image.
  • the region of interest 51 is the region of position uncertainty around the estimated position, 50, and is the region where the tag is searched for in the next frame.
  • Tracking based only on predicted position and expected behavior may be susceptible to error, if the object makes unusual maneuvers, or if two different objects come close to each other, or if the environment has a high level of background optical noise. In such cases, position tracking alone may lose track of the correct object, and provide false information. Support information provided by even a rudimentary level of recognition, then provides additional information to the trackers in situations where the position tracking may be susceptible to error.
  • Each object is tracked using its calculated global coordinates. For each successive grabbed image instant, its path in this global space is translated to local space coordinates of each camera, and a region of interest (ROI) around its next expected position is calculated for each camera. ROI's that are located within the frame of each camera are then searched for the detection of the tagged objects. Should an object be detected in some of these ROI's, its presence is confirmed preferably using feature extraction from the detected segments in these ROI's, and these features are compared to known features of these tagged objects, by the known methods of image processing. Any match then causes an accumulation of the featured segment, translated to global coordinates, to the tracked object statistics.
  • ROI: region of interest
  • a model is then fitted to the object statistical data history to help in estimating its future path.
  • the estimated position of the object in the next image is the center of the ROI, and the estimation uncertainty corresponds to the ROI size: the larger the uncertainty, the larger the ROI, and the larger the region searched for the tag.
  • Image acquisition switching logic based on ROI within images, is used to decide which of the appropriate cameras should be grabbed. Using this logic to utilize only those cameras that image the existing objects within the monitored area, and not the cameras that apparently do not image anything of interest, enables efficient usage of cameras and a decrease of processing bandwidth, as not all the cameras are being grabbed at the same time.
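The acquisition switching logic might be sketched as follows, assuming toy camera footprints on the global site map and a per-track predicted position and ROI; the classes and numbers are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    """Toy overhead tracker: a square footprint on the global site map."""
    name: str
    center: tuple            # footprint center, global (x, y) in meters
    half_size: float = 3.0   # half the footprint edge

    def sees(self, point, margin=0.0):
        return (abs(point[0] - self.center[0]) <= self.half_size + margin and
                abs(point[1] - self.center[1]) <= self.half_size + margin)

@dataclass
class Track:
    pos: tuple    # last measured global position
    vel: tuple    # estimated global velocity
    roi: float    # ROI radius, i.e. the position uncertainty

    def predict(self, dt):
        return (self.pos[0] + self.vel[0] * dt, self.pos[1] + self.vel[1] * dt)

def cameras_to_grab(cameras, tracks, dt):
    """Grab only those cameras whose footprint contains a predicted ROI."""
    return [cam for cam in cameras
            if any(cam.sees(t.predict(dt), margin=t.roi) for t in tracks)]

cams = [Camera("tracker-10", (0, 0)), Camera("tracker-13", (6, 0))]
tracks = [Track(pos=(2.5, 0.0), vel=(1.0, 0.0), roi=0.5)]
print([c.name for c in cameras_to_grab(cams, tracks, dt=1.0)])
# -> ['tracker-10', 'tracker-13']: the predicted ROI straddles both footprints
```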
  • the invention is generally described herein using a coded tag, mounted on the object to be tracked, it is to be understood that the invention can equally well be implemented using any other identifying information obtained from the object, such as its size, a predetermined geometrical component, or any other feature that can be used for identification of the object using the methods of image recognition, as are known in the art.
  • the invention is generally described herein using an illumination source coaxially mounted with the camera, and an optional retroreflector mounted within the tag, to ensure good visibility and contrast of the tag or object features, it is to be understood that the invention can equally well be implemented using the ambient light and without any retroreflection means, if the camera sensitivity and the ambient light conditions so permit.
  • FIG. 2 is a schematic layout of the construction of a tracker or reader.
  • Each tracker or reader, 30, is comprised of an imager 31, imaging optics, 32 and also optionally, a coaxial light source, 33 that is preferably arranged in a ring around the imaging optics lens 32.
  • Light coming out of the source, 33 is scattered to illuminate all the field of view, 37.
  • Rays, 34 are in the direction of the tag, 40, residing within the imaged field of view.
  • Fig. 3 shows a schematic drawing of the illumination of the tag of the present invention, the tag preferably comprising a retro- reflector.
  • the tag structure is described in more detail in the embodiments of Figs. 9 and 10 hereinbelow, but its information can be spatially coded, chromatically coded, or both, in the form of a two dimensional color pattern.
  • the use of color can increase the tag data capacity and decrease its geometrical overall size.
  • the reader should have a spatio-chromatic resolution sufficient to discern the tag pattern.
  • One common example of a tag is the use of a black & white tag like a barcode and a black & white camera as a reader.
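For such a black & white tag, reading could reduce to thresholding a one-dimensional intensity profile taken across the tag and averaging each code segment. The following toy Python sketch assumes an already-extracted profile and a known bit count; it is an illustration, not the patent's decoder.

```python
import numpy as np

def decode_strip(strip, n_bits):
    """Threshold a 1D intensity profile across the tag and average each
    of n_bits equal segments: bright segment -> 1, dark segment -> 0."""
    binary = strip > strip.mean()
    return [int(seg.mean() > 0.5) for seg in np.array_split(binary, n_bits)]

# Example: a noisy profile across a 4-bit black & white tag.
profile = np.array([250, 240, 30, 20, 255, 245, 25, 35], dtype=float)
print(decode_strip(profile, 4))  # -> [1, 0, 1, 0]
```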
  • the beams, 34 are retro-reflected back, in a specularly scattered pattern around directions 34, to form a beam around the central beams, 35. As shown in Fig. 2, this beam is in the direction of the center of light source, 33, and is aligned with the reader's imaging optics aperture, 32.
  • the identification reader imaging parameters are preferably selected to optimize the light contrast between the tag brightness and the generally diffuse light brightness of the background, to enhance the tag detectability and to reduce the background noise. This objective is achieved by keeping the reader's aperture small, to decrease the background brightness as much as possible, leaving the higher intensity tag to be imaged and digitized within the reader's imager, 31. This option is preferable in applications where the tag speed is low and its distance from the reader may vary over a large range, so that the small optical aperture provides the high depth of field needed for imaging the tag position without evident lack of focus, and the long exposures are adequate for capturing the tag without evident motion blur.
  • the tracker imaging parameters are selected to get a normally exposed image of the background and saturated light of the tag, by opening the optics aperture 32 normally.
  • the image formed in this way allows for tag tracking, using its saturated intensity as a tracking feature, and general image analysis, as known in the art.
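A minimal sketch of using the saturated tag intensity as a tracking feature: threshold the tracker frame near the imager's saturation level and take the centroid of the resulting blob. The threshold value and function names are assumptions.

```python
import numpy as np

SATURATION = 250  # assumed saturation level for an 8-bit imager

def tag_centroid(frame):
    """Centroid of saturated pixels, used as the tag tracking feature."""
    ys, xs = np.nonzero(frame >= SATURATION)
    if xs.size == 0:
        return None             # no tag visible in this frame
    return float(xs.mean()), float(ys.mean())

# Example: a dark 5x5 frame with a bright 2-pixel tag blob.
frame = np.full((5, 5), 40, dtype=np.uint8)
frame[2, 2] = frame[2, 3] = 255
print(tag_centroid(frame))      # -> (2.5, 2.0)
```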
  • the coordination of multiple cameras necessitates the use of a system of common global site coordinates, such that the local image coordinates, Pi(Xi, Yi), of each camera have to be transformed to these global site coordinates, P(X, Y, Z), and vice versa.
  • This invention provides for a system and method of using three calibrations: 1. The camera installation calibration; 2. The camera optics calibration; 3. The tag imaging calibration. These calibrations, together with the real time measurement data, are used to get the coordinate transformed data. To facilitate three-dimensional global coordinate estimation, a third measurement needs to be added to the two local measurements; this measurement is the tag distance from the camera.
  • For the camera installation calibration, the camera is initially calibrated to find its global model parameters, e.g., three position coordinates Pc(Xc, Yc, Zc) and three rotation angles (Rx, Ry, Rz), using methods known in the art (for instance, page 67 of "Digital Image Processing" by R. C. Gonzalez and R. E. Woods, Addison-Wesley, September 1993).
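Applying the calibrated installation parameters amounts to a rigid transform between camera-local and global site coordinates, as in this sketch; the Euler-angle composition order is an assumed convention, since the patent does not fix one.

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Rigid rotation from the three calibrated angles (Rx, Ry, Rz),
    composed Z*Y*X; the convention is an assumption, not the patent's."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def camera_to_global(p_cam, cam_pos, cam_angles):
    """Map a camera-frame point to global site coordinates using the
    calibrated camera position Pc(Xc, Yc, Zc) and rotation angles."""
    return (rotation_matrix(*cam_angles) @ np.asarray(p_cam, float)
            + np.asarray(cam_pos, float))

# Identity check: zero rotation, camera installed at (1, 2, 3).
print(camera_to_global((0.5, 0.0, 1.0), (1, 2, 3), (0, 0, 0)))  # -> [1.5 2. 4.]
```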
  • Fig. 4 illustrates the calibration of the camera global position and the camera pan and tilt.
  • the global calibration point, 40a is viewed perpendicularly to the global X direction, 64. This point is selected such that its camera local image counterpart lies in the image center, thus it also lies on the camera optical axis, 61.
  • the camera tilt, 63 is given by the angle between the camera optical axis, 61 and the camera plummet, 62. It is measured using the known global points, 40a and 60. This procedure is repeated with the camera pan.
  • Fig. 5 describes the method of correlating camera local image positions and their corresponding direction angles relative to the camera optical axis. This is done by measuring the relation between the image of a calibration point, 40a, whose local camera location, 43, lies along a radial ray originating from the camera image center, and the corresponding global direction angle, 39, as measured between the ray, 35, in the direction of the calibration point, 40a, and the camera optical axis, 38.
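This calibration can be represented as a per-camera lookup from radial image position to off-axis direction angle, which also absorbs the radial lens distortion. A sketch with purely illustrative table values:

```python
import numpy as np

# Per-camera calibration table: radial image position (px) versus the
# measured off-axis direction angle (deg). The non-linear growth of the
# angle with radius captures the lens distortion directly.
RADIUS_PX = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
ANGLE_DEG = np.array([0.0, 7.5, 15.5, 24.5, 35.0])

def direction_angle(u, v, cx, cy):
    """Off-axis angle for pixel (u, v) given the image center (cx, cy)."""
    r = np.hypot(u - cx, v - cy)
    return float(np.interp(r, RADIUS_PX, ANGLE_DEG))

print(direction_angle(u=500, v=300, cx=320, cy=240))  # radius ~190 px -> ~14.7 deg
```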
  • the tag can preferably be made in the shape of a sphere. This provides the advantage that its image is independent of its orientation, thereby simplifying the calibration procedure.
  • Fig. 7 illustrates the real-time measurement of a global 3D position.
  • the tag distance, 72, is first measured using its distance dependent features. Once the tag distance has been estimated, the local image position of the tag, 43b, is used to estimate the global line, 71, between the tag, 40b, at a distance, 72, from the camera, and the camera located at position, 60. The equation of the global line is determined from the local position of the tagged image, 43b, and the prior camera calibration as explained above. The tag global position, 40b, on the line, 71, is then found by fitting its measured distance, 72, into this line equation.
  • the global direction angles, 69, of the tag to be positioned, 40b, are simply obtained from the local camera direction angles, 39, shown in Fig. 5, and the camera tilt angle, 63.
  • Fig. 9 illustrates yet another option, where the color layers are concentric. These are just examples of the spatial arrangement of the colored strips; many other arrangements are possible.
  • the reader uses coaxial illumination of the field of view.
  • the color-coded retro-reflective tag causes the tag reflection to be very bright, such that the reader can work with a very low F-number, darkening the background and emphasizing the colored tag.
  • the system and methods of the present invention may be advantageously used within existing CCTV camera tracking networks, where the cameras are already installed and the central video server is linked to all of the cameras.
  • illumination units, 33, as described in Fig. 2, and some additional readers, are added at the inspected zone entrances, corridors and heavily used paths.
  • the bright reflectance of the tag can be used as an identified and positioned hooking point for any scene analysis functions; for example, the tag can be attached to a shopping cart that needs to be identified and positioned, so that the customer could be tracked without tagging him and thus invading his privacy. Any customer holding the cart could be recognized as the cart owner, and further identification and tracking of that customer could be performed by tracking the cart.
  • a tracking algorithm for following the customer's movements by video scene analysis can be used.
  • the customer could be lost by the surveillance system, as can often happen when the person goes behind another object or else mingles with the crowd.
  • the customer's path would then be lost completely from that point on.
  • when the customer comes back to his cart and holds it again, he can be recognized again as the cart owner and his track can be merged with the tagged cart track, such that his tracked path is regained.

Landscapes

  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

A system for tracking an object within an enclosed space (Fig. 1), in which the object is first identified as it enters the space (Fig. 1, 40) by means of a high resolution camera monitoring a limited area near the entrance (Fig. 1, 14, 15), identifying the object (40) preferably by means of an information coded tag attached to the object. The remainder of the space, including the entrance area, is equipped with one or more tracking cameras (Fig. 1, 10-13) having a lower resolution and generally unable to identify the object, which track the object through the entire space using known methods of object tracking.

Description

IMAGING BASED IDENTIFICATION AND POSITIONING SYSTEM
FIELD OF THE INVENTION
The present invention relates to the field of Imaging based Identification and Positioning Systems, especially for use indoors, in determining the identity and position of objects by means of an imaging system and an optical identity tag carried on the object.
BACKGROUND OF THE INVENTION
Various systems are known in the prior art that address the problem of positioning and identifying tagged objects. These systems generally use radiation such as magnetic, radio frequency, optical or ultrasonic radiation. Furthermore, Indoor Positioning System (IPS) technologies can be classified according to the estimation methods they employ: geometric, statistical, scene analysis and proximity based. Frequently, location aware systems combine these approaches to achieve higher accuracy or precision. Some of these systems have not received widespread acceptance because of excessive cost and insufficient reliability. Some of these methods have limited position accuracy, such as by giving a proximity indication that is only one bit of information regarding the tag position; if the tag is inside a predefined circle around the sensor, the sensor outputs a one, and if not, a zero. In some applications a proximity indication may suffice, but in many others it is too limiting.
Radio wave based methods, in particular those that use active tags, generally excel in their area coverage capabilities and their non-line-of-sight characteristics. On the other hand, in the vicinity of metals and water, their performance may be degraded by interference noise. Furthermore, their positioning accuracy is variable, mostly of the order of 5 meters, and there are only a few vendors that supply systems that give close to one-meter accuracy. In addition, some RF-based vendors, particularly those who base their systems on Bluetooth, do not provide identification functionality, and others may have varying precision of identification.
Ultrasonic methods are mostly based on time-of-flight measurements, the difference from radio methods being that the radiation velocity is a million times slower and thus the time-of-flight is much longer, measured in milliseconds instead of nanoseconds. This leads to much higher position accuracy, typically of a few centimeters. The area covered may be limited, however, usually to the size of a room.
Active technologies, such as RF-based, RFID, IR and ultrasound, may have maintenance cost problems with battery operation, as in some cases their continuous operation necessitates frequent battery replacement.
Generally, optical systems, like scene analysis and IR, excel in position accuracy, continuous operation and the low cost of tags and readers. Their identification precision is good (particularly in scene analysis), but there are vendors using IR based approaches that do not have identification capabilities.
Many optical systems utilize camera position and orientation calibration techniques. Many such techniques have been devised that utilize field point measurements together with their local camera image counterparts to form the camera coordinates transformation. Yet there are two troublesome issues that may affect the accuracy of this procedure. One problem is that of optical distortion, which has radial dependency and may be particularly problematic in security applications where a wide field of view is utilized. The other is the fact that generally, the camera only measures two local image coordinates, such that only two of the three-dimensional (i.e. global) field point coordinates can be recovered, mandating either a priori knowledge of the value of the third coordinate, or else an additional two dimensional measurement from another camera, such as is done in triangulation measurements.
CCTV control systems also relate to the field of imaging based tracking. These systems are designed to help human operators to track specific activity, be it personnel, customers, intruders etc. These systems have evolved over the years, adopting computer vision techniques in order to enhance the overall system performance and quality, and to save human labor. Yet the current practice of these systems is generally limited to low level image understanding such as "video motion detection" or VMD, designed to help human operators to focus on the most important events. These solutions may not function well in activity-intense applications such as casinos, supermarkets, hospitals etc., because the level of understanding needed in order to automatically focus on specific events may be high and thus may severely limit the usability of these methods.
SUMMARY OF THE INVENTION
The present invention attempts to overcome the difficulties associated with prior art systems, as outlined in the background section, by providing a novel method and apparatus for identifying and tracking objects such as people, vehicles, carts etc., within closed spaces. The system may preferably comprise an identifying tag affixed to an object, and apparatus and techniques for automatically reading the tag information and the position vector and its derivatives, e.g. the tag velocity vector and acceleration.
The system preferably comprises separate identification and tracking units and an optical tag unit. The system generally comprises imaging devices and optional light sources, coaxially disposed with the imaging devices, and also preferably, a retroreflective tag attached to the moving object to be identified and tracked. The system differs from the prior art systems described above, in that it is based on a passive tag, yet it offers remote identification and positioning capabilities. This ensures a cost effective solution that is reliable, highly accurate and remotely operated.
In accordance with a first preferred embodiment of the invention, there is provided a system with both high identification performance and large area of coverage by using two sets of cameras; one set optimized for the tracking function using a large field of view and comparatively low identification resolution, and the second set optimized for tag identification using comparatively high resolution and a small field of view. The tracking camera or cameras are disposed throughout the area where tracking of the objects is to be performed, while the identification cameras, also known as readers, are positioned to identify tags in restricted regions of the total space where the tag regularly passes, such as around doors, in corridors, etc. Once the identification has been performed, the tracking camera keeps track of the tag position.
In accordance with another aspect of the invention, there is provided a system that can be used in poor lighting conditions, utilizing a retroreflective tag that, together with active illumination with monochromatic light and a suitably filtered imaging device, can suppress spurious light sources and enhance the tag's reflected light.
In accordance with another aspect of the invention, the present invention provides for a method to correct for the optical distortion of the camera lens utilizing direct measurement of the camera optical distortion. In accordance with another aspect of the invention, the present invention provides for three-dimensional measurement of the tag position using the tag distance from the camera as a third measurement, in addition to the two local coordinates measured in the camera image. The tag distance is measured using its features such as size or brightness and a priori calibration data of these features in relation to the tag distance. An alternative option to measure the tag distance to the camera is by using a two-camera simultaneous position measurement of the tag, and triangulation techniques, as known in the art.
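The triangulation alternative might look like the following standard two-ray least-squares construction; this is a textbook method shown as a sketch, not code from the patent.

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest points between rays p1 + s*d1 and p2 + t*d2,
    a standard two-camera triangulation. Fails for parallel rays (denom = 0)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    w0 = p1 - p2
    b, d, e = d1 @ d2, d1 @ w0, d2 @ w0
    denom = 1.0 - b * b              # a = c = 1 for unit direction vectors
    s = (b * e - d) / denom
    t = (e - b * d) / denom
    return (p1 + s * d1 + p2 + t * d2) / 2.0

# Two cameras at (0,0,0) and (2,0,0), both viewing a tag at (1,1,0):
print(triangulate((0, 0, 0), (1, 1, 0), (2, 0, 0), (-1, 1, 0)))  # -> [1. 1. 0.]
```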
In accordance with another aspect of the invention, the present invention provides a maintenance free and low-cost optical tag that uses retroreflective means to reflect and modulate light originating at the reader, back to the reader's imaging device, without the need for an internal source of energy on the tag or object.
In accordance with another aspect of the invention, the present invention allows for simultaneous identification and position vector measurements of multiple tagged moving objects using tag enhanced features identification and tracking algorithms as will become apparent from the detailed description of the system.
In accordance with another aspect of the invention, there is provided covert operation using light in the infrared region. In addition, as the method is based on retro reflected radiation, the tag can be detected only from the reader and no light is scattered in other directions.
In accordance with another aspect of the invention, there is provided a cost effective, thin and lightweight tag that can be affixed easily to various objects, as will become apparent from the detailed description of the construction and the operation of the optical tag reading apparatus.
In accordance with another aspect of the invention, the present invention provides for scene understanding using the system's identification and positioning signal, carried through the video together with image understanding algorithms, as will become apparent from the detailed description of the system algorithm.
In accordance with another aspect of the invention, the present invention provides for zone surveillance using the system's identification and positioning signal, carried through the video together with image understanding algorithms as will become apparent from the detailed description of the system algorithm. In accordance with another aspect of the invention, the present invention may be used to upgrade standard video networks, or CCTV installations, by offering additional software and hardware, such as video server, passive optical tags, coaxial illumination and identification cameras to identify and position tags coming from various local cameras into a global set of tracks described upon a common site map, usually indoors, so that the global picture of tracked objects can be grasped from the fragmented local images coming from the video network. Thus, using existing video networks as well as added software and hardware, a network of separate cameras can collaborate to form a unified system for indoor identification and position determination of tagged objects.
In any of the above mentioned methods the volume may be a zone under surveillance, and at least part of the volume is preferably located adjacent to an access opening to said zone, such as an entrance door, or in a busy part of said zone, such as in a corridor.
There is therefore provided in accordance with another preferred embodiment of the present invention, a method for tracking within a volume an object having identifying information, comprising the steps of:
(i) viewing the volume with at least one tracking camera having a first resolution sufficient to track the position of the object within the volume,
(ii) tracking the position of the object in the volume by means of signal processing of images of the at least one camera,
(iii) viewing a selected part of the volume with an identification camera having a higher resolution than that of the at least one tracking camera, and sufficient to identify the information,
(iv) identifying the information by means of signal processing images obtained by the identification camera and determining the position of the object within the part of the volume, and
(v) correlating the position of the object within the part of the volume determined by the identification camera with its position determined by the at least one tracking camera, such that the at least one tracking camera also acquires the identifying information.
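Step (v) might be realized as a simple gated nearest-neighbour association in the common global coordinate system, as in this Python sketch; the gating distance and data structures are assumptions, since the patent only requires that the two position fixes be correlated.

```python
import numpy as np

def correlate_identity(tracks, reader_pos, tag_id, gate_m=0.5):
    """Attach the reader's tag_id to the nearest tracker track lying
    within gate_m (meters) of the reader's position fix."""
    best, best_dist = None, gate_m
    for track in tracks:
        dist = float(np.linalg.norm(np.asarray(track["pos"], float)
                                    - np.asarray(reader_pos, float)))
        if dist < best_dist:
            best, best_dist = track, dist
    if best is not None:
        best["id"] = tag_id   # the tracker track now carries the identity
    return best

tracks = [{"pos": (2.0, 3.1), "id": None}, {"pos": (7.5, 1.0), "id": None}]
correlate_identity(tracks, reader_pos=(2.1, 3.0), tag_id="cart-42")
print(tracks[0]["id"])  # -> cart-42
```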
In the above method, the identifying information may be a known feature of the object, or it may be coded within a tag. If coded in a tag, the tag may preferably comprise spatial information, in which case the resolution of the identifying cameras is spatial resolution, or it may preferably comprise chromatic information, in which case the resolution of the identifying cameras is chromatic resolution.
In accordance with yet another preferred embodiment of the present invention, the above described methods may preferably also comprise the step of illuminating at least the part of the volume. In such a case, the tag is preferably such as to enhance its optical contrast against the background. Additionally and preferably, the illuminating is performed along the optical axis of the identifying camera, and the optical contrast is enhanced by use of a retroreflector which reflects illumination back essentially along the optical axis of the identifying camera.
When illumination is used, the at least part of the volume may preferably be all of the volume, in which case the step of illuminating is also preferably performed along the optical axes of the at least one tracking camera, and the optical contrast is enhanced by use of a retroreflector which reflects the illumination back essentially along the optical axis of the at least one tracking camera. In such a case, the identification camera may preferably use an imaging aperture smaller than that of the at least one tracking camera, or an exposure time shorter than that of the at least one tracking camera.
In accordance with still another preferred embodiment of the present invention in any of the above described methods using an illuminating step, the illuminating may preferably be performed in the IR band.
In accordance with still another preferred embodiment of the present invention, in any of the above described methods using a tag, the tag may be a passive tag or an active tag.
There is further provided in accordance with still another preferred embodiment of the present invention, a method as described above and wherein the position of the object is determined in three dimensions by the steps of:
(i) imaging the known feature of the object with one of the at least one cameras, and using the image to determine the distance of the object from the one camera,
(ii) defining a sphere centered on the camera and having a radius equal to the distance of the object from the camera,
(iii) defining the direction of the object relative to the camera by means of a two dimensional image of the object, and
(iv) determining the coordinates of the position of the object in three dimensions by the intersection of the direction with the sphere.
The known feature of the object may preferably be the tag.
In accordance with a further preferred embodiment of the present invention, there is also provided a method as described above and wherein the volume is a zone under surveillance, and the at least part of the volume is located adjacent to an entrance to the zone.
In accordance with another preferred embodiment of the present invention, there is also provided a method as described above, and wherein the object has a user associated therewith, the method also comprising the steps of (i) tracking the user by means of video scene analysis algorithms, such that the user can also be tracked when distant from the object, and (ii) tracking the user by tracking the object once the user becomes re-associated with the object.
There is also provided in accordance with yet a further preferred embodiment of the present invention, a system for tracking within a volume an object having identifying information, comprising:
(i) at least one tracking camera viewing the volume, the at least one tracking camera having a first resolution sufficient to track the position of the object within the volume,
(ii) a signal processor utilizing images of the object from the at least one tracking camera to track the position of the object in the volume, and
(iii) an identification camera viewing a selected part of the volume, the identification camera having a higher resolution than that of the at least one tracking camera, the identification camera identifying the information and determining the position of the object within the selected part of the volume, wherein the signal processor also correlates the position of the object within the part of the volume determined by the identification camera with its position determined by the at least one tracking camera, such that the at least one tracking camera also acquires the identifying information.
In this system, the identifying information may be a known feature of the object, or it may be coded within a tag. If coded in a tag, the tag may preferably comprise spatial information, in which case the resolution of the identifying cameras is spatial resolution, or it may preferably comprise chromatic information, in which case the resolution of the identifying cameras is chromatic resolution.
In accordance with yet another preferred embodiment of the present invention, the above described system may preferably also comprise a source for illuminating at least the part of the volume. In such a case, the tag is preferably such as to enhance its optical contrast against the background. Additionally and preferably, the illuminating source is directed along the optical axis of the identifying camera, and the optical contrast is enhanced by use of a retroreflector which reflects illumination back essentially along the optical axis of the identifying camera.
In the above described systems including an illuminating source, the at least part of the volume may preferably be all of the volume, in which case the system also preferably comprises at least one more illuminating source directed along the optical axes of the at least one tracking camera, and the optical contrast is preferably enhanced by use of a retroreflector which reflects the illumination back essentially along the optical axis of the at least one tracking camera. In such a case, the identification camera may preferably use an imaging aperture smaller than that of the at least one tracking camera, or an exposure time shorter than that of the at least one tracking camera.
In accordance with still another preferred embodiment of the present invention in any of the above described systems using an illuminating source, the source may preferably be an IR source.
In accordance with still another preferred embodiment of the present invention, in any of the above described systems using a tag, the tag may be a passive tag or an active tag. Furthermore, the tag may be the known feature of the object.
Additionally and preferably, in any of the above described systems, the volume may be a zone under surveillance, and the at least part of the volume is preferably located adjacent to an entrance to the zone.
There is also provided in accordance with another preferred embodiment of the present invention, a system as described above, and wherein the object has a user associated therewith, such that the system tracks the user when close to the object, the system also comprising video analysis algorithms, utilizing the at least one tracking camera and an identification camera, for tracking the user when distant from the object. There is even further provided in accordance with another preferred embodiment of the present invention, a method of determining the coordinates in three dimensions of the position in a volume of an object, an image of the object having a feature having characteristics which are dependent on the distance of the object from an imaging camera, comprising the steps of:
(i) using an image of the feature to determine the distance of the object from the camera,
(ii) defining a sphere centered on the camera and having a radius equal to the distance of the object from the camera,
(iii) defining the direction of the object relative to the camera by means of a two dimensional image of the object, and
(iv) determining the coordinates of the object in three dimensions by the intersection of the direction with the sphere.
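A sketch of steps (ii) to (iv): form a global-frame unit direction toward the object from its image-derived angles, then place the object at the measured distance along that direction, i.e. on the sphere of that radius centred on the camera. The angle convention is an assumption made for illustration.

```python
import numpy as np

def direction_unit(pan_rad, tilt_rad):
    """Global unit vector for a camera ray given pan (about the vertical
    axis) and tilt (down from the horizontal); one possible convention."""
    return np.array([np.cos(tilt_rad) * np.cos(pan_rad),
                     np.cos(tilt_rad) * np.sin(pan_rad),
                     -np.sin(tilt_rad)])

def object_position(cam_pos, pan_rad, tilt_rad, distance_m):
    """Intersection of the viewing direction with the sphere of radius
    distance_m centred on the camera: cam_pos + r * unit_direction."""
    return (np.asarray(cam_pos, float)
            + distance_m * direction_unit(pan_rad, tilt_rad))

# Ceiling camera at (0, 0, 3) m, object 4 m away, 30 degrees below horizontal:
print(object_position((0.0, 0.0, 3.0), 0.0, np.radians(30.0), 4.0))
# -> approximately [3.46, 0.0, 1.0]
```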
In the above described method, the feature is preferably a known dimension of the object, and the step of determining the distance of the object from the camera is preferably performed at least by comparing the measured size of the image of the known dimension with the true known dimension.
Alternatively and preferably, the feature is the brightness of the object, and the step of determining the distance of the object from the camera is preferably performed at least by comparing the brightness with known brightnesses predetermined from images taken at different distances.
Other objects and advantages of this invention will become apparent as the description proceeds.
The disclosures of all publications mentioned in this section and in the other sections of the specification, and the disclosures of all documents cited in the above publications, are hereby incorporated by reference, each in its entirety.
BRIEF DESCRIPTION OF THE DRAWINGS
Non-limiting examples of embodiments of the present invention are described below with reference to figures attached hereto and listed below. In the figures, identical structures, elements or parts that appear in more than one figure are generally labeled with the same numeral in all the figures in which they appear. Dimensions of components and features shown in the figures are chosen for convenience and clarity of presentation and are not necessarily shown to scale.
For fuller understanding of the objects and aspects of the present invention, preferred embodiments of the invention are described with reference to the accompanying drawings, which show in:
Fig. 1: A schematic illustration of an embodiment of the system of imagers, in accordance with a preferred embodiment of the present invention;
Fig. 2: A schematic illustration of an embodiment of the tag reader and tag tracker, in accordance with a preferred embodiment of the present invention;
Fig. 3: A schematic illustration of the operation of the tag, in accordance with a preferred embodiment of the present invention;
Fig. 4: A schematic illustration of the global camera calibration consisting of its global position and orientation in accordance with a preferred embodiment of the present invention;
Fig. 5: A schematic illustration of the camera optics calibration including the direction angles corresponding to the imager local image positions, in accordance with a preferred embodiment of the present invention;
Fig. 6: A schematic illustration of the tag imaging calibration in accordance with a preferred embodiment of the present invention;
Fig. 7: A schematic illustration of the real time global position measurement in accordance with a preferred embodiment of the present invention;
Fig. 8: A schematic illustration of an embodiment of the spatio-colored tag, in accordance with a preferred embodiment of the present invention;
Fig. 9: A schematic illustration of an embodiment of the spatio-colored tag, in accordance with an optional embodiment of the present invention;
Fig. 10: A schematic illustration of an embodiment of the infrared imager subsystem, in accordance with an optional embodiment of the present invention.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Reference is now made to Fig. 1 which shows a schematic layout of the system of the present invention comprising a set of imagers that optionally are mounted on the ceiling of an indoor space 26 that needs to be monitored. The imagers have various imaging parameters and may preferably have light sources associated with them. The system shown in Fig. 1 preferably comprises tracking imagers, known hereinafter as trackers, 10, 11, 12, 13, that image the entire monitored area, 26, through the tracked areas 20, 21, 22, 23 respectively, and identification imagers, known hereinafter as readers, 14 and 15, that image the areas 24 and 25 respectively, which are located near the entrance or exit openings to the space. The readers have higher resolution than the trackers to enable them to identify the object to be tracked. An object, such as a person, a cart, or similar, having a tag, 40, is typically moving along the path, 41. When entering the monitored area, 26, it is identified as it passes through the monitored area, 24, using the reader 14. The tag signal is also detected in the tracker, 10, and although the limited resolution of the tracker makes it generally difficult to accurately read the identity of the tag, its identity is verified indirectly by coordinating the tag positions obtained separately from the reader camera 14 and the tracker camera 10, in a common coordinate system of the monitored site. Furthermore, the tracker may be able to support the location identification by being able to recognize at least some features of the tag or object, such as its overall size, shape, color, or similar. The usefulness of this aspect of the tracker's properties will be described hereinbelow.
The tag is further tracked by a neighboring tracker, 13, as it passes into its field of view, 23. Each tracker further transforms the local camera coordinates of the tag to the global site coordinates, thus allowing for coordination between all the trackers and reader or readers. The above arrangement of the tracking system ensures both high identification performance and large tracking field coverage, by providing the readers and the trackers with separate imaging conditions, each set of cameras using a suitable set of parameters for its particular task, reading or tracking. Thus the two different types of cameras may be considered to perform both tasks, but with widely different effectiveness: the reader cameras provide definitive identification and can track tags or objects within their limited area of coverage, allowing them to track the positions and thus transfer the data; the trackers, on the other hand, track the tags over their large area of coverage and preferably have some limited recognition capability to allow them to lock on to the tracked tag more efficiently, as will be explained below.
The tag 40 position is tracked by means of a sequential series of images grabbed on all of the cameras, using its path features, 41, including at least its position, and preferably also some of the position derivatives, such as its velocity and direction, acceleration, etc., and also preferably using at least some of its recognized image features, such as the tag size, color, length, etc. This data is accumulated to form a statistical description of the tag track. The position-based information and its derivatives are used to estimate its future expected spatio-temporal dependent path, 42, and specifically, the next position 50 and region of interest, 51, that is expected at the time of the next grabbed image. The region of interest 51 is the region of position uncertainty around the estimated position, 50, and is the region where the tag is searched for in the next frame. The advantage of some level of recognition facility in the trackers also now becomes apparent. Tracking based only on predicted position and expected behavior may be susceptible to error, if the object makes unusual maneuvers, if two different objects come close to each other, or if the environment has a high level of background optical noise. In such cases, position tracking alone may lose track of the correct object and provide false information. Support information provided by even a rudimentary level of recognition then provides additional information to the trackers in situations where position tracking alone may be susceptible to error.
Each object is tracked using its calculated global coordinates. For each successive grabbed image instant, its path in this global space is translated to the local space coordinates of each camera, and a region of interest (ROI) around its next expected position is calculated for each camera. ROI's that are located within the frame of each camera are then searched for the detection of the tagged objects. Should an object be detected in some of these ROI's, its presence is confirmed preferably using feature extraction from the detected segments in these ROI's, and these features are compared to known features of these tagged objects, by the known methods of image processing. Any match then causes an accumulation of the featured segment, translated to global coordinates, into the tracked object statistics.
A model is then fitted to the object statistical data history to help in estimating its future path. The estimated position of the object in the next image is the center of the ROI, and the estimation uncertainties correspond to the ROI size: the larger the uncertainty, the larger the ROI, and the larger the region searched for the tag.
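As a non-limiting illustration of this estimation step, the following Python sketch fits a constant-velocity model to a short track history and sizes the ROI by the residual uncertainty; the window length, the three-sigma scaling and the sample track are assumptions made for this example.

import numpy as np

def predict_roi(track_xy, sigma_scale=3.0):
    # track_xy: array of shape (N, 2), recent global positions, one per frame.
    # Returns the predicted next position and the ROI half-size per axis.
    t = np.arange(len(track_xy))
    # Fit x(t) and y(t) as straight lines (position plus velocity).
    coeffs = [np.polyfit(t, track_xy[:, i], 1) for i in range(2)]
    predicted = np.array([np.polyval(c, len(track_xy)) for c in coeffs])
    # The scatter of the history around the fitted path sets the search
    # region: the larger the uncertainty, the larger the ROI.
    fitted = np.column_stack([np.polyval(c, t) for c in coeffs])
    roi_half_size = sigma_scale * (track_xy - fitted).std(axis=0) + 1.0
    return predicted, roi_half_size

track = np.array([[0.0, 0.0], [0.9, 1.1], [2.1, 1.9], [3.0, 3.1]])
print(predict_roi(track))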
Image acquisition switching logic, based on ROI within images, is used to decide which of the appropriate cameras should be grabbed. Using this logic to utilize only those cameras that image the existing objects within the monitored area, and not the cameras that apparently do not image anything of interest, enables efficient usage of cameras and a decrease of processing bandwidth, as not all the cameras are being grabbed at the same time.
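The switching logic might be sketched as follows, in Python; the frame dimensions, the projection callables and the single sample camera are hypothetical placeholders for the calibrated transforms described later in this section.

def in_frame(img_xy, half, width=640, height=480):
    # An ROI is worth searching if any part of it can fall inside the frame.
    x, y = img_xy
    return -half <= x <= width + half and -half <= y <= height + half

def select_cameras(cameras, rois):
    # cameras: dict of name -> projection function, world (x, y) -> image
    # (x, y), or None when the point is outside the camera's coverage.
    # rois: list of (world_xy, half_size_px) predicted for the next frame.
    active = set()
    for name, project in cameras.items():
        for world_xy, half in rois:
            img = project(world_xy)
            if img is not None and in_frame(img, half):
                active.add(name)
                break  # one visible ROI is enough to grab this camera
    return active

cams = {"tracker10": lambda p: (p[0] * 100.0, p[1] * 100.0)}
print(select_cameras(cams, [((2.0, 1.5), 20.0)]))   # -> {'tracker10'}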
Although the invention is generally described herein using a coded tag, mounted on the object to be tracked, it is to be understood that the invention can equally well be implemented using any other identifying information obtained from the object, such as its size, a predetermined geometrical component, or any other feature that can be used for identification of the object using the methods of image recognition, as are known in the art.
Furthermore, although the invention is generally described herein using an illumination source coaxially mounted with the camera, and an optional retroreflector mounted within the tag, to ensure good visibility and contrast of the tag or object features, it is to be understood that the invention can equally well be implemented using the ambient light and without any retroreflection means, if the camera sensitivity and the ambient light conditions so permit.
Reference is now made to Fig. 2 which is a schematic layout of the construction of a tracker or reader. Each tracker or reader, 30, comprises an imager, 31, imaging optics, 32, and optionally also a coaxial light source, 33, that is preferably arranged in a ring around the imaging optics lens, 32. Light coming out of the source, 33, is scattered to illuminate the entire field of view, 37. Rays, 34, are in the direction of the tag, 40, residing within the imaged field of view.
Reference is now made to Fig. 3 which shows a schematic drawing of the illumination of the tag of the present invention, the tag preferably comprising a retro-reflector. The tag structure is described in more detail in the embodiments of Figs. 8 and 9 hereinbelow, but its information can be spatially coded, chromatically coded, or both, in the form of a two dimensional color pattern. The use of color can increase the tag data capacity and decrease its overall geometrical size. The reader should have a spatio-chromatic resolution sufficient to discern the tag pattern. One common example of a tag is the use of a black & white tag, like a barcode, with a black & white camera as a reader. The beams, 34, are retro-reflected back, in a specularly scattered pattern around the directions 34, to form a beam around the central beams, 35. As shown in Fig. 2, this beam is in the direction of the center of the light source, 33, and is aligned with the reader's imaging optics aperture, 32.
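A minimal Python sketch of reading such a spatio-chromatic strip pattern is given below, assuming the bright tag region has already been segmented and reduced to one mean color per strip; the three-color alphabet and the sample values are invented for illustration.

COLORS = {(255, 0, 0): "R", (0, 255, 0): "G", (0, 0, 255): "B"}

def decode_strips(strip_colors):
    # strip_colors: list of (r, g, b) mean colors, one per strip, in the
    # order they appear along the tag axis.
    def nearest(c):
        return min(COLORS, key=lambda k: sum((a - b) ** 2 for a, b in zip(k, c)))
    return "".join(COLORS[nearest(c)] for c in strip_colors)

print(decode_strips([(250, 10, 5), (12, 240, 8), (3, 9, 251), (247, 6, 12)]))
# -> "RGBR"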
The identification reader imaging parameters are preferably selected to optimize the light contrast between the tag brightness and the generally diffuse light brightness of the background, to enhance the tag detectability and to reduce the background noise. This objective is achieved by keeping the reader's aperture small, to decrease the background brightness as much as possible, leaving the higher intensity tag to be imaged and digitized within the reader's imager, 31. This option is preferable in applications where the tag speed is low and its distance from the reader may vary over a large range, so that the small optical aperture provides the high depth of field needed for imaging the tag position without evident lack of focus, and the long exposure is adequate for capturing the tag without evident motion blur.
Alternatively, the identification camera exposure can be selected to be short by setting a high shutter speed. This, in turn, is the option of choice for freezing a tag moving at high speed relative to the camera.
The tracker imaging parameters are selected to obtain a normally exposed image of the background and a saturated image of the tag, by opening the optics aperture, 32, to a normal setting. The image formed in this way allows for tag tracking, using its saturated intensity as a tracking feature, and for general image analysis, as known in the art.
The coordination of multiple cameras necessitates the use of a system of common global site coordinates, such that the local image coordinates, Pi(Xi,Yi), of each camera have to be transformed to these global site coordinates, P(X,Y,Z), and vice versa. This invention provides for a system and method of using three calibrations: 1. The camera installation calibration; 2. The camera optics calibration; 3. The tag imaging calibration. These calibrations, together with the real time measurement data, are used to obtain the coordinate transformed data. To facilitate three-dimensional global coordinate estimation, a third measurement needs to be added to the two local measurements; this measurement is the tag distance from the camera.
1. The camera calibration: initially the camera is calibrated to find its global model parameters, e.g., 3 position coordinates PC(XC,YC,ZC) and 3 rotation angles (Rx,Ry,Rz), using methods as known in the art (for instance, page 67 of the book "Digital Image Processing" by R. C. Gonzalez and R. E. Woods, Addison-Wesley, September 1993). Reference is now made to Fig. 4, which illustrates the calibration of the camera global position and the camera pan and tilt. The global calibration point, 40a, is viewed perpendicularly to the global X direction, 64. This point is selected such that its camera local image counterpart lies in the image center, thus it also lies on the camera optical axis, 61. The camera tilt, 63, is given by the angle between the camera optical axis, 61, and the camera plummet, 62. It is measured using the known global points, 40a and 60. This procedure is repeated for the camera pan.
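By way of a non-limiting sketch, once the six installation parameters are known, a global point can be expressed in the camera's local frame as below; the rotation order (Rz, then Ry, then Rx) is an assumption, since the text does not fix an angle convention.

import numpy as np

def rotation(rx, ry, rz):
    # Build the camera orientation matrix from the three calibrated angles.
    cx, sx, cy, sy, cz, sz = (np.cos(rx), np.sin(rx), np.cos(ry),
                              np.sin(ry), np.cos(rz), np.sin(rz))
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def global_to_local(p_global, cam_pos, cam_angles):
    # Invert the camera pose: translate to the camera, then rotate back.
    R = rotation(*cam_angles)
    return R.T @ (np.asarray(p_global) - np.asarray(cam_pos))

print(global_to_local([1.0, 2.0, 0.0], [0.0, 0.0, 3.0], [0.0, 0.0, 0.0]))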
2. The camera optics calibration: Reference is now made to Fig. 5, which describes the method of correlating camera local image positions and their corresponding direction angles relative to the camera optical axis. This is done by measuring the relation between the local camera location, 43, of the image of a calibration point, 40a, lying along a radial ray originating from the camera image center, and the corresponding global direction angle, 39, measured between the ray, 35, in the direction of the calibration point, 40a, and the camera optical axis, 38.
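The resulting calibration might be stored and applied as a simple lookup, as in the following sketch; the radius and angle samples are invented for illustration.

import numpy as np

# Calibrated pairs: radial pixel distance from the image center, and the
# measured angle of the corresponding ray off the optical axis.
radius_px = np.array([0.0, 100.0, 200.0, 300.0])
angle_deg = np.array([0.0, 7.5, 15.2, 23.4])

def pixel_to_angle(r_px):
    # Linear interpolation between the calibrated samples.
    return np.interp(r_px, radius_px, angle_deg)

print(pixel_to_angle(150.0))   # -> 11.35 degrees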
3. The tag imaging calibration: Reference is now made to Fig. 6, which describes the tag imaging calibration. The size and brightness of the imaged tag, 43b, and the relations between them, are initially used to produce a-priori calibration data of these features as a function of the tag distance, 72. The tag size and brightness are functions of at least some of the tag true size, the camera illumination level, the tag distance to the camera, the tag location angle relative to the camera's illumination axis, 69, and, as shown in Fig. 6a, the tilt, 74, of the tag normal, 73, relative to the direction of the camera axis, 71. The tag distance is the unknown, while the tag size and the illumination level are known a-priori. As the tag size and brightness vary differently with the distance, tag location angle and tag normal tilt, the particular tag size and brightness in a specific location can be calibrated.
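A non-limiting sketch of such an a-priori calibration table and of its inversion at run time follows; the sizes and brightnesses recorded at the known distances are invented values, following approximate 1/d and 1/d-squared trends.

import numpy as np

# A-priori calibration: the tag imaged at known distances.
known_distance_m = np.array([1.0, 2.0, 4.0, 8.0])
apparent_size_px = np.array([80.0, 40.0, 20.0, 10.0])
brightness_level = np.array([160.0, 40.0, 10.0, 2.5])

def distance_from_measurement(size_px=None, level=None):
    # np.interp needs ascending sample points, so the tables are reversed
    # (size and brightness both fall as the distance grows).
    if size_px is not None:
        return np.interp(size_px, apparent_size_px[::-1], known_distance_m[::-1])
    return np.interp(level, brightness_level[::-1], known_distance_m[::-1])

print(distance_from_measurement(size_px=30.0))   # -> 3.0 (meters)
print(distance_from_measurement(level=20.0))     # -> ~3.33 (meters)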
According to a further preferred embodiment of the present invention, the tag can preferably be made in the shape of a sphere. This provides the advantage that its image is independent of its orientation, thereby simplifying the calibration procedure.
Reference is now made to Fig. 7, which illustrates the real-time measurement of a global 3D position. The tag distance, 72, is first measured using its distance dependent features. Once the tag distance has been estimated, the local image position of the tag, 43b, is used to estimate the global line, 71, between the tag, 40b, at a distance, 72, from the camera, and the camera located at position, 60. The equation of the global line is determined from the local position of the tag image, 43b, and the prior camera calibration as explained above. The tag global position, 40b, on the line, 71, is then found by fitting its measured distance, 72, into this line equation.
An alternative way of describing this geometry is by taking the distance, 72, as the radius of a sphere, 73, having the camera global position, 60, as its center. The intersection of this sphere with the line, 71, is the global position of the tag, 40b.
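In Python, the real-time measurement might then be computed as below, placing the tag at the measured distance along the calibrated ray, which is exactly the intersection of that ray with the sphere; the azimuth/elevation angle convention and the numerical values are illustrative assumptions.

import numpy as np

def tag_global_position(cam_pos, azimuth_rad, elevation_rad, distance_m):
    # Unit ray from the calibrated direction angles; since its length is 1,
    # cam_pos + distance * direction lies on the sphere of that radius.
    direction = np.array([
        np.cos(elevation_rad) * np.cos(azimuth_rad),
        np.cos(elevation_rad) * np.sin(azimuth_rad),
        np.sin(elevation_rad),
    ])
    return np.asarray(cam_pos) + distance_m * direction

# Camera on the ceiling at 3 m, looking down at 60 degrees below horizontal.
print(tag_global_position([0.0, 0.0, 3.0], np.radians(30.0), np.radians(-60.0), 3.4))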
The global direction angles, 69, of the tag to be positioned, 40b, are simply obtained from the local camera direction angles, 39, shown in Fig. 5, and the camera tilt angle, 63. This is facilitated by correcting for the local camera roll, prior to the calculation of the global direction angles.
Reference is now made to Fig. 8, which shows a spatio-spectral coded tag, 80, with retro-reflective layers to code the information and enhance its response to an interrogating beam of light coming from the reader direction. The tag colored strips are parallel to each other, as indicated in the drawing.
Reference is now made to Fig. 9, which illustrates yet another option, where the color layers are concentric. These are just examples of the spatial arrangement of the colored strips, and many other arrangements are possible. The reader uses coaxial illumination of the field of view. In turn, the color-coded retro-reflective tag causes the tag reflection to be very bright, such that the reader can work with a very high F-number, darkening the background and emphasizing the colored tag.
Reference is now made to Fig. 10, which illustrates a covert option of this arrangement that uses a color infrared camera, in which three black and white imagers, 100, are employed with different bandpass filters in the infrared region, 110; broadband infrared illumination is used in a coaxial arrangement with this camera, 120; and a corresponding retro-reflective colored tag carries different spectral filters with designated reflectances in the infrared zone. A processing means is used to acquire the camera video and analyze it. Another option is to acquire three images at three different wavelengths of the IR region using a 3CCD camera.
The system and methods of the present invention may be advantageously used within existing CCTV camera tracking networks, where the cameras are already installed and the central video server is linked to all of the cameras. In order to adapt such an existing system to use the present invention, there may still be a need to fit illumination units, 33, as described in Fig. 2, and some additional readers at the inspected zone entrances, corridors and heavily used paths. The bright reflectance of the tag can be used as an identified and positioned hooking point for any scene analysis functions; for example, the tag can be attached to a shopping cart that needs to be identified and positioned, so that the customer can be tracked without tagging him and thus invading his privacy. Any customer holding the cart can be recognized as the cart owner, and further identification and tracking of that customer can be performed by tracking the cart.
According to a further preferred embodiment of the present invention, at times when the cart owner leaves the cart, a tracking algorithm for following the customer's movements by video scene analysis, such as is known in the prior art, can be used. However, using such a prior art tracking system, without the benefit of the present invention, the customer could be lost by the surveillance system, as can often happen when the person goes behind another object or mingles with the crowd. In prior art systems, the customer's path would then be lost completely from that point on. However, according to this embodiment of the present invention, when the customer comes back to his cart and holds it again, he can be recognized again as the cart owner and his track can be merged with the tagged cart track, such that his tracked path is regained.
It is to be understood that the combination of the tracked tag according to the present invention, with scene analysis algorithms, can be utilized for many different applications besides that described hereinabove.

Claims

I claim:
1. A method for tracking within a volume an object having identifying information, comprising the steps of:
viewing said volume with at least one tracking camera having a first resolution sufficient to track the position of said object within said volume;
tracking the position of said object in said volume by means of signal processing of images of said at least one camera;
viewing a selected part of said volume with an identification camera having a higher resolution than that of said at least one tracking camera, and sufficient to identify said information;
identifying said information by means of signal processing images obtained by said identification camera and determining the position of said object within said part of said volume; and
correlating said position of said object within said part of said volume determined by said identification camera with its position determined by said at least one tracking camera, such that said at least one tracking camera also acquires said identifying information.
2. The method of claim 1 wherein said identifying information is a known feature of said object.
3. The method of claim 1 wherein said identifying information is coded within a tag.
4. The method of claim 3 wherein said tag comprises spatial information, and said resolution of said identification camera is spatial resolution.
5. The method of claim 3 wherein said tag comprises chromatic information, and said resolution of said identification camera is chromatic resolution.
6. The method of any of the previous claims 3 to 5, and also comprising the step of illuminating at least said part of said volume, and wherein said tag is such as to enhance its optical contrast against the background.
7. The method of claim 6 wherein said step of illuminating is performed along the optical axis of said identifying camera, and said optical contrast is enhanced by use of a retroreflector which reflects illumination back essentially along said optical axis of said identifying camera.
8. The method of claim 6, and wherein said at least part of said volume is all of said volume.
9. The method of claim 8 wherein said step of illuminating is also performed along the optical axes of said at least one tracking camera, and said optical contrast is enhanced by use of a retroreflector which reflects said illumination back essentially along said optical axis of said at least one tracking camera.
10. The method of claim 9 and wherein said identification camera uses an imaging aperture smaller than that of said at least one tracking camera.
11. The method of claim 9 and wherein said identification camera uses an exposure time shorter than that of said at least one tracking camera.
12. A method according to any of the previous claims, and wherein said illuminating is performed in the IR band.
13. A method according to any of the claims 3 to 12, and wherein said tag is a passive tag.
14. A method according to any of claims 3 to 12, and wherein said tag is an active tag.
15. A method according to any of claims 3 to 12 and wherein said position of said object is determined in three dimensions by the steps of:
imaging said known feature of said object with one of said at least one cameras, and using said image to determine the distance of said object from said one camera;
defining a sphere centered on said camera and having a radius equal to said distance of said object from said camera;
defining the direction of said object relative to said camera by means of a two dimensional image of said object; and
determining the coordinates of the position of said object in three dimensions by the intersection of said direction with said sphere.
16. A method according to claim 15 and wherein said known feature of said object is said tag.
17. The method of any of the previous claims and wherein said volume is a zone under surveillance, and said at least part of said volume is located adjacent to an entrance to said zone.
18. The method of any of the previous claims and wherein said object has a user associated therewith, said method also comprising the steps of:
tracking said user by means of video scene analysis algorithms, such that said user can also be tracked when distant from said object; and
tracking said user by tracking said object once said user becomes re-associated with said object.
19. A system for tracking within a volume an object having identifying information, comprising:
at least one tracking camera viewing said volume, said at least one tracking camera having a first resolution sufficient to track the position of said object within said volume;
a signal processor utilizing images of said object from said at least one tracking camera to track the position of said object in said volume;
an identification camera viewing a selected part of said volume, said identification camera having a higher resolution than that of said at least one tracking camera, said identification camera identifying said information and determining the position of said object within said selected part of said volume;
wherein said signal processor also correlates said position of said object within said part of said volume determined by said identification camera with its position determined by said at least one tracking camera, such that said at least one tracking camera also acquires said identifying information.
20. The system of claim 19 wherein said identifying information is a known feature of said object.
21. The system of claim 19 wherein said identifying information is coded within a tag.
22. The system of claim 21 wherein said tag comprises spatial information, and said resolution of said identification camera is spatial resolution.
23. The system of claim 21 wherein said tag comprises chromatic information, and said resolution of said identification camera is chromatic resolution.
24. The system of any of the previous claims 21 to 23, and also comprising a source for illuminating at least said part of said volume, and wherein said tag is such as to enhance its optical contrast against the background.
25. The system of claim 24 wherein said illuminating source is directed along the optical axis of said identifying camera, and said optical contrast is enhanced by use of a retroreflector which reflects illumination back essentially along said optical axis of said identifying camera.
26. The system of claim 24, and wherein said at least part of said volume is all of said volume.
27. The system of claim 26 also comprising at least one additional illuminating source directed along the optical axes of at least one of said at least one tracking cameras, and said optical contrast is enhanced by use of a retroreflector which reflects said illumination back essentially along said optical axis of said at least one tracking camera.
28. The system of claim 27 and wherein said identification camera uses an imaging aperture smaller than that of said at least one tracking camera.
29. The system of claim 27 and wherein said identification camera uses an exposure time shorter than that of said at least one tracking camera.
30. A system according to any of the previous claims, and wherein said illuminating source emits in the IR band.
31. A system according to any of the claims 21 to 30, and wherein said tag is a passive tag.
32. A system according to any of claims 21 to 30, and wherein said tag is an active tag.
33. A system according to claim 21 and wherein said known feature of said object is said tag.
34. A system according to any of the previous claims 21 to 33 and wherein said volume is a zone under surveillance, and said at least part of said volume is located adjacent to an entrance to said zone.
35. A system according to any of the previous claims, and wherein said object has a user associated therewith, such that said system tracks said user when close to said object, said system also comprising: video analysis algorithms, utilizing said at least one tracking camera and said identification camera, for tracking said user when distant from said object.
36. A method of determining the coordinates in three dimensions of the position in a volume of an object, an image of said object having a feature having characteristics which are dependent on the distance of said object from an imaging camera, comprising the steps of:
using an image of said feature to determine the distance of said object from said camera;
defining a sphere centered on said camera and having a radius equal to said distance of said object from said camera;
defining the direction of said object relative to said camera by means of a two dimensional image of said object; and
determining the coordinates of said object in three dimensions by the intersection of said direction with said sphere.
37. A method according to claim 36, and wherein said feature is a known dimension of said object, and wherein said determining the distance of said object from said camera is performed by comparing the measured size of said image of said known dimension with the true known dimension.
38. A method according to claim 36, and wherein said feature is the brightness of said object, and wherein said determining the distance of said object from said camera is performed by comparing said brightness with known brightnesses predetermined from images taken at different distances.
PCT/IL2005/000998 2004-09-16 2005-09-16 Imaging based identification and positioning system WO2006030444A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US61018204P 2004-09-16 2004-09-16
US60/610,182 2004-09-16

Publications (2)

Publication Number Publication Date
WO2006030444A2 true WO2006030444A2 (en) 2006-03-23
WO2006030444A3 WO2006030444A3 (en) 2009-04-23

Family

ID=36060426

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2005/000998 WO2006030444A2 (en) 2004-09-16 2005-09-16 Imaging based identification and positioning system

Country Status (1)

Country Link
WO (1) WO2006030444A2 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7483049B2 (en) * 1998-11-20 2009-01-27 Aman James A Optimizations for live event, real-time, 3D object tracking

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008113648A1 (en) * 2007-03-20 2008-09-25 International Business Machines Corporation Event detection in visual surveillance systems
RU2494567C2 (en) * 2007-05-19 2013-09-27 Видеотек С.П.А. Environment monitoring method and system
US8199194B2 (en) 2008-10-07 2012-06-12 The Boeing Company Method and system involving controlling a video camera to track a movable target object
WO2010042628A3 (en) * 2008-10-07 2010-06-17 The Boeing Company Method and system involving controlling a video camera to track a movable target object
DE102010035834A1 (en) * 2010-08-30 2012-03-01 Vodafone Holding Gmbh An imaging system and method for detecting an object
WO2012152592A1 (en) 2011-05-07 2012-11-15 Hieronimi, Benedikt System for evaluating identification marks, identification marks and use thereof
CN103649775A (en) * 2011-05-07 2014-03-19 贝内迪克特·希罗尼米 System for evaluating signatures, signatures and uses thereof
JP2014517272A (en) * 2011-05-07 2014-07-17 ヒエロニミ、ベネディクト System for evaluating identification marks, identification marks, and uses thereof
US8985438B2 (en) 2011-05-07 2015-03-24 Benedikt Hieronimi System for evaluating identification marks and use thereof
CN103649775B (en) * 2011-05-07 2016-05-18 贝内迪克特·希罗尼米 System for evaluating an identification mark, identification mark and use thereof
WO2013105084A1 (en) * 2012-01-09 2013-07-18 Rafael Advanced Defense Systems Ltd. Method and apparatus for aerial surveillance
US10074180B2 (en) 2014-02-28 2018-09-11 International Business Machines Corporation Photo-based positioning
US10943088B2 (en) 2017-06-14 2021-03-09 Target Brands, Inc. Volumetric modeling to identify image areas for pattern recognition
CN109215073A (en) * 2017-06-29 2019-01-15 罗伯特·博世有限公司 For adjusting method, monitoring arrangement and the computer-readable medium of video camera

Also Published As

Publication number Publication date
WO2006030444A3 (en) 2009-04-23


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 05784941

Country of ref document: EP

Kind code of ref document: A2