
HK1148345A - Method of and arrangement for mapping range sensor data on image sensor data - Google Patents

Method of and arrangement for mapping range sensor data on image sensor data

Publication number: HK1148345A
Application number: HK11102326.8A
Authority: HK (Hong Kong)
Prior art keywords: range sensor, data, point cloud, sensor data, computer arrangement
Other languages: Chinese (zh)
Inventors: 克日什托夫‧米克萨 (Krzysztof Miksa), 拉法尔‧扬‧格利什琴斯基 (Rafał Jan Gliszczyński), 卢卡什‧彼得‧塔博罗维斯基 (Łukasz Piotr Taborowski)
Original assignee: 电子地图有限公司
Application filed by 电子地图有限公司
Publication of HK1148345A


Abstract

Method of and arrangement for mapping first range sensor data from a first range sensor (3(1)) to image data from a camera (9(j)). The method includes: receiving time and position data from a position determination device on board a mobile system, as well as the first range sensor data from the first range sensor (3(1)) on board the mobile system and the image data from the camera (9(j)) on board the mobile system; identifying a first point cloud within the first range sensor data, relating to at least one object; producing a mask relating to the object based on the first point cloud; mapping the mask on object image data relating to the same object as present in the image data from the at least one camera (9(j)); performing a predetermined image processing technique on at least a portion of the object image data.

Description

Method and arrangement for mapping range sensor data onto image sensor data
Technical Field
The present invention relates to the field of capturing and processing image data with an image sensor, such as a camera, on a moving vehicle, such as a Mobile Mapping System (MMS), and mapping range sensor data, obtained by at least one range sensor positioned on the same moving vehicle, onto these image data.
In an embodiment, the invention also relates to the field of removing privacy sensitive data from such images. The privacy-sensitive data may relate to objects that are moving relative to the fixed world (i.e., the earth and objects fixed to the earth).
Background
In some MMS applications, pictures are captured of, among other things, building facades and other fixed objects, such as trees, street signs, and streetlights, which are later used in "real world" 2D and/or 3D street images employed, for example, in car navigation systems. These images are then displayed to the driver of a car provided with such a navigation system, so that the driver sees on the screen of the navigation system a 2D and/or 3D image corresponding to the real world view when looking through the window of the car. Such pictures may also be used in applications other than car navigation systems, for example in games that may be played on a computer as a stand-alone system or as a collaborative system in a networked environment. Such an environment may be the internet. The solution of the invention as presented below is not limited to a particular application.
However, millions of such MMS images may inadvertently contain private information, such as human faces and readable car license plates. It is not desirable to use such images in public applications with such private or other undesirable information still intact. For example, there have been newspaper reports about such undesirable information in the images used in street map views published by Google™. The images taken in real world situations represent static and moving objects in the vicinity of the MMS. In the images, the objects carrying such private or other undesirable information may be static or moving relative to the fixed world. One has to identify such objects in the images taken by the camera on the MMS. Some prior art applications have sought to identify moving objects based only on image properties and to determine their movement trajectories based on the properties of color pixels in the image sequence. However, this approach only works if the object can be found in more than two images in sequence, so that its trajectory can be determined.
Others have disclosed systems in which other types of sensors are used to determine a short time trajectory approximation of an object relative to a vehicle in which such sensors are disposed. Such sensors may include laser scanners, radar systems, and stereo cameras. Such a system is mentioned, for example, in the introduction to EP 1418444. This document relates to real-time applications where the relative position and speed of an object relative to a vehicle is important, for example, to avoid accidents between vehicles and objects. The document does not disclose how position and velocity data obtained by the sensors can be mapped on image data obtained by the stereo camera. Furthermore, it does not disclose how to determine the absolute position and absolute velocity of such objects. Here, "absolute" is to be understood in an absolute sense with respect to a fixed real world as determined by the earth and objects fixed to the earth (e.g., buildings, traffic signs, trees, mountains, etc.). Such a real world may be defined, for example, by a reference grid used by the GPS system. Furthermore, this document does not address how to process privacy sensitive data in images taken by a camera.
For example, the use of laser scanner data to help identify the location of building footprints is described in co-pending patent application PCT/NL 2006/050264.
Disclosure of Invention
It is an object of the present invention to provide a system and method that allows for the accurate detection of objects present in a series of images taken by one or more cameras that take pictures as they are moving, for example, because they are positioned on a mobile mapping system.
To this end, the invention provides a computer arrangement comprising a processor and memory connected to the processor, the memory comprising a computer program with data and instructions arranged to allow the processor to (a sketch in code follows this list):
● receive time and location data from a location determining device on board a mobile system, first range sensor data from at least a first range sensor on board the mobile system, and image data from at least one camera on board the mobile system;
● identify a first point cloud within the first range sensor data relating to at least one object;
● generate a mask associated with the object and based on the first point cloud;
● map the mask onto object image data relating to the same object present in the image data from the at least one camera;
● perform a predetermined image processing technique on at least a portion of the object image data.
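By way of illustration only, the following minimal Python sketch shows how these five steps might fit together; every function name and data shape is an assumption made for the example, not part of the claimed arrangement, and the placeholders are worked out in the sketches given later in this description.

```python
# Minimal sketch of the claimed processing chain (names/shapes assumed).
import numpy as np

def process_frame(pose, range_points, image):
    """pose: time-stamped position/orientation of the mobile system;
    range_points: Nx3 array from the first range sensor;
    image: HxWx3 array from the camera."""
    clouds = identify_point_clouds(range_points)       # one cloud per object
    out = image.copy()
    for cloud in clouds:
        mask = make_mask(cloud, pose, out.shape)       # 2D boolean mask
        out = apply_image_processing(out, mask)        # e.g. blur/pixelate
    return out

def identify_point_clouds(points):
    # Placeholder: decompose the scan into per-object point clouds
    # (a clustering sketch is given further below).
    return [points]

def make_mask(cloud, pose, shape):
    # Placeholder: project the cloud into camera pixel coordinates
    # (a projection sketch is given at the end of this description).
    return np.zeros(shape[:2], dtype=bool)

def apply_image_processing(image, mask):
    # Placeholder: render the masked region illegible.
    return image
```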
The position data may contain orientation data.
The range sensor provides such point clouds relating to different objects. Since the objects are not located at the same location, the points of each such point cloud show clearly different distances and/or orientations to the range sensor, depending on which object the points belong to. Thus, using these range differences relative to the range sensor, masks associated with the different objects can easily be made. These masks may then be applied to the image taken by the camera to identify objects in the image. This results in a reliable way of identifying those objects, earlier than is possible by relying on image data only.
This approach proves to be extremely effective if the object is not moving relative to the fixed world. However, such objects may be moving. Then the accuracy of such detection is reduced. Hence, in an embodiment, the invention relates to a computer arrangement as defined above, wherein the computer program is arranged to allow the processor to:
● receive second range sensor data from a second range sensor on board the mobile system;
● identify a second point cloud within the second range sensor data relating to the same at least one object;
● calculate a motion vector of the at least one object from the first point cloud and the second point cloud;
● map the mask onto the object image data while using the motion vector.
In this embodiment, the system determines the absolute position and absolute velocity of the object at the time the image was taken, and the system uses a short-time approximation of the movement trajectory of the object. That is, because the time period involved between successive images and successive range sensor scans is extremely short, one can assume that all motion is generally linear and can identify objects that are moving relative to the fixed world with great accuracy. To this end, it uses range sensor scanning of at least two range sensors, which, because they are spaced apart and differently oriented, provide data that can be readily used to identify the position and movement of an object. This reduces the problems associated with identifying moving objects based only on image properties and determining their movement trajectories based on color pixel properties in the image sequence, as is known from the prior art.
Furthermore, when an extremely accurate method is used to determine the absolute position and speed of the MMS system that is supporting the camera and sensor, it is also possible to obtain an extremely accurate absolute position and short-time trajectory estimation of at least one point of an object present on the image taken by the camera in the vicinity of the MMS system. This allows reconstruction of the 3D position of the image pixels, which improves the value of such images, and improved reconstruction of objects present in so-called road corridors seen by the driver of the vehicle. Not only can image filtering techniques be used, but spatial features (such as spatial separation, actual size, size variation) can also be added to approximate the principal axis of the object or a reference point of the object. This increases the value of the filtering.
By adding a 3D spatial aspect to the image (i.e., by adding a z-component to the object in the image), even objects with the same color space properties (e.g., near shrubs versus far trees) can be effectively separated.
A pixel may be associated with a position in (absolute) space. By doing so, the results of one image analysis and filtering (such as face detection or text detection) can be easily communicated and applied to other images having the same mapping to absolute space.
In some cases, an ROI (region of interest) on one image with unreadable properties may be selected and used to determine the same properties in the ROI of another image. For example, ROIs of two or more different images may be mapped to the same ROI in space and properties from the second image may be applied to images taken earlier or later in time. Thus, using range sensor measurements may link algorithms performed on multiple images.
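By way of illustration, a sketch of such ROI transfer is given below; the pixel_to_world and world_to_pixel helpers are hypothetical and would have to be built from the pose data and the per-pixel range measurements.

```python
def transfer_roi(roi_pixels, pixel_to_world_1, world_to_pixel_2):
    """Map an ROI detected in image 1 (e.g. an unreadable license
    plate) into image 2, which views the same absolute-space region.
    roi_pixels: iterable of (u, v) pixel coordinates in image 1.
    Both mapping helpers are hypothetical, assumed to be derived from
    the pose and range-sensor data."""
    world_points = [pixel_to_world_1(u, v) for (u, v) in roi_pixels]
    return [world_to_pixel_2(p) for p in world_points]
```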
The principles of the present invention may be applied while using any type of range sensor, such as a laser scanner, radar or lidar. The images may be taken by any type of camera carried by any suitable vehicle, including airborne vehicles.
A camera on an MMS system can take successive pictures in time so that it reproduces several pictures with overlapping parts of the same scene. In such overlapping parts there may be objects that will thus be displayed in several pictures. If an image filtering technique is to be applied to such an object, then by using the method of this invention, this technique need only be applied to the object in one of these pictures, typically the picture taken for the first time in time. This will result in an image processed object that can be used in all pictures it exists. This saves significant computation time.
When the speed of the object is large relative to the speed of the camera, the important factor is the observed size of the object, since the larger the speed difference between the two, the larger the deviation of the observed size from the actual size. Therefore, if one wishes to map the range sensor data onto the image data, this effect must be compensated for. Thus, in a further embodiment, the size of the object is determined from the short time trajectory data. In this embodiment, a short time approximation of the movement estimation of the object allows the computer arrangement to apply a shape correction procedure to point clouds (obtained by the range sensor) associated with (fast) moving objects, resulting in a better correspondence between such point clouds and the object in the image taken by the camera.
Drawings
The present invention will be explained in detail with reference to some drawings, which are only intended to show embodiments of the invention and not to limit the scope. The scope of the invention is defined in the appended claims and by their technical equivalents.
The drawings show:
FIG. 1 shows an MMS system with a camera and laser scanner;
FIG. 2 shows a graphical representation of position and orientation parameters;
FIG. 3 shows a schematic top view of an automobile with two cameras and two range sensors on its roof;
FIG. 4a shows a diagrammatic representation of a computer arrangement by means of which the invention may be carried out;
FIG. 4b shows a flow chart of a basic process according to an embodiment of the invention;
FIG. 5 shows an image of a non-moving object;
FIG. 6 shows an example of range sensor data obtained by one of the range sensors relating to the same scene visible on the image of FIG. 5;
FIG. 7 shows how the data of FIG. 6 may be used to generate a mask;
FIG. 8 shows how the mask of FIG. 7 may be used to identify an object in the image shown in FIG. 5;
FIG. 9 shows the result of blurring the image of the object shown in FIG. 8;
FIG. 10 shows an example of a picture taken by one of the cameras;
FIG. 11 shows an example of range sensor data obtained by one of the range sensors relating to the same scene visible on the image of FIG. 10;
FIG. 12 shows how the data of FIG. 11 may be used to generate a mask;
FIG. 13 shows how the mask of FIG. 12 may be used to identify an object in the image shown in FIG. 10;
FIG. 14 shows a flow chart of a basic process according to an embodiment of the invention;
FIGS. 15a-15c show the position of an object relative to a car equipped with a camera and two range sensors at successive time instants;
FIG. 16 shows a range sensor measurement point cloud for an object;
FIG. 17 shows a cylinder as used in the model to calculate the centroid of an object;
FIG. 18 shows two range sensors pointing in different directions;
FIG. 19 shows how the object may be identified in an image while using the mask shown in FIG. 12 and using velocity estimation of the moving object;
FIG. 20 shows the object as identified by the mask;
FIG. 21 shows the result of blurring the image of the object shown in FIG. 20;
FIG. 22 shows how the true size and shape of a moving object can be determined.
Detailed Description
The present invention relates generally to the field of processing images taken by a camera on a Mobile Mapping System (MMS). More specifically, in some embodiments, the invention relates to enhancing such images or identifying (moving) objects in such images and eliminating privacy sensitive data in these images. However, other applications covered by the scope of the appended claims are not excluded. For example, the camera may be carried by any other suitable vehicle, such as an airborne vehicle.
Fig. 1 shows an MMS system in the form of a car 1. The car 1 is provided with one or more cameras 9(i), i = 1, 2, 3, ..., I, and one or more laser scanners 3(j), j = 1, 2, 3, ..., J. In the context of the present invention, information from at least two laser scanners 3(j) is used if a moving object has to be identified. The car 1 may be driven by a driver along a road of interest. The laser scanners 3(j) may be replaced by any kind of range sensor that allows detecting, for a certain set of orientations, the distance between the range sensor and an object sensed by the range sensor. Such an alternative range sensor may be, for example, a radar sensor or a lidar sensor. If a radar sensor is used, its range and orientation measurement data should be comparable to those obtainable with a laser scanner.
The term "camera" is understood herein to include any type of image sensor, including, for example, LadybugTM
The automobile 1 includes a plurality of wheels 2. Furthermore, the car 1 is provided with a high-precision position/orientation determining device. Such a device is arranged to provide 6 degrees of freedom data regarding the position and orientation of the car 1. One embodiment is shown in FIG. 1. As shown in fig. 1, the position/orientation determining apparatus includes the following components:
● a GPS (global positioning system) unit connected to an antenna 8 and arranged to communicate with a plurality of satellites SLk, k = 1, 2, 3, ..., and to calculate position signals from signals received from the satellites SLk. The GPS unit is connected to a microprocessor μP. The microprocessor μP is arranged to store time-stamped data received from the GPS unit. This data will be sent to an external computer arrangement for further processing. In an embodiment, based on the signals received from the GPS unit, the microprocessor μP can determine a suitable display signal to be shown on a monitor 4 in the car 1, informing the driver of the location of the car and of the direction in which the car is travelling.
● a DMI (distance measurement instrument). This instrument is an odometer that measures the distance travelled by the car 1 by sensing the number of rotations of one or more of the wheels 2. The DMI is also connected to the microprocessor μP. The microprocessor μP is arranged to store time-stamped data received from the DMI. This data will also be sent to the external computer arrangement for further processing. In an embodiment, the microprocessor μP takes the distance measured by the DMI into account while calculating the display signal from the output signal of the GPS unit.
● an IMU (inertial measurement unit). Such an IMU may be implemented as three gyroscope units arranged to measure rotational accelerations, and three translational accelerometers measuring accelerations along three orthogonal directions. The IMU is also connected to the microprocessor μP. The microprocessor μP is arranged to store time-stamped data received from the IMU. This data will also be sent to the external computer arrangement for further processing.
The system as shown in fig. 1 collects geographic data, for instance by taking pictures with the one or more cameras 9(i) mounted on the car 1. The cameras are connected to the microprocessor μP. Furthermore, while the car 1 is travelling along the roads of interest, the laser scanners 3(j) take laser samples. The laser samples thus comprise data relating to the environment of these roads of interest, and may include data relating to buildings, trees, traffic signs, parked cars, people, and the like.
The laser scanner 3(j) is also connected to the microprocessor μ P and sends these laser samples to the microprocessor μ P.
It is generally desirable to provide position and orientation measurements from the three measurement units GPS, IMU and DMI as accurately as possible. These position and orientation data are measured when the camera 9(i) takes a picture and the laser scanner 3(j) takes a laser sample. Both the pictures and the laser samples are stored for later use in a suitable memory of the microprocessor μ P in combination with the corresponding position and orientation data of the car 1 collected when these pictures and laser samples were taken. An alternative to collecting all data from the GPS, IMU, DMI, camera 9(i) and laser scanner 3(j) in time is to time stamp all of these data and store the time stamped data in the microprocessor's memory along with other data. Other time synchronization markers may alternatively be used.
The pictures and laser samples include, for example, information about the facade of a building. In an embodiment, the laser scanner 3(j) is arranged to generate an output with a minimum 50Hz and 1deg resolution to generate a sufficiently dense output for the method. A laser scanner such as MODEL LMS291-S05 manufactured by SICK can produce this output.
Fig. 2 shows which position signals can be obtained from the three measurement units GPS, DMI and IMU shown in fig. 1. Fig. 2 shows that the microprocessor μP is arranged to calculate six different parameters, namely three distance parameters x, y, z relative to an origin in a predetermined coordinate system, and three angular parameters ωx, ωy and ωz, representing rotations about the x-axis, y-axis and z-axis, respectively. The z-direction coincides with the direction of the gravity vector.
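By way of illustration, the six parameters could be combined into a single vehicle-to-world transform as in the following sketch; the MMS's actual angle convention is not specified in this description, so the Rz·Ry·Rx order used here is an assumption of the example.

```python
import numpy as np

def pose_to_matrix(x, y, z, wx, wy, wz):
    """Build a 4x4 homogeneous vehicle-to-world transform from the six
    parameters of fig. 2. Assumes wx, wy, wz are rotations about the
    x-, y- and z-axes applied as Rz @ Ry @ Rx (a common, but here
    assumed, convention)."""
    cx, sx = np.cos(wx), np.sin(wx)
    cy, sy = np.cos(wy), np.sin(wy)
    cz, sz = np.cos(wz), np.sin(wz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx      # combined rotation
    T[:3, 3] = [x, y, z]          # translation to the world origin
    return T
```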
Fig. 3 shows an MMS with two range sensors 3(1), 3(2) (which may be laser scanners, but may alternatively be, for example, radars) and two cameras 9(1), 9(2). The two range sensors 3(1), 3(2) are arranged on the roof of the car 1 such that they point to the right of the car 1, viewed relative to its direction of travel. The scanning direction of range sensor 3(1) is indicated by SD1 and the scanning direction of range sensor 3(2) by SD2. The camera 9(1) also looks to the right, i.e., it may be directed perpendicular to the direction of travel of the car 1. The camera 9(2) looks in the direction of travel. This arrangement is suitable for all those countries where vehicles drive in the right lane. The arrangement is preferably changed for countries where vehicles drive on the left side of the street, in the sense that the camera 9(1) and the laser scanners 3(1), 3(2) are then positioned on the left side of the roof of the car (again, "left" being defined relative to the direction of travel of the car 1). It should be understood that the skilled person may devise many other configurations.
The microprocessor in the car 1 may be implemented as a computer arrangement. An example of such a computer arrangement is shown in fig. 4 a.
In fig. 4a, a diagrammatic view of a computer arrangement 10 is given, which comprises a processor 11 for performing arithmetic operations.
The processor 11 is connected to a plurality of memory components, including a hard disk 12, Read Only Memory (ROM) 13, Electrically Erasable Programmable Read Only Memory (EEPROM) 14, and Random Access Memory (RAM) 15. Not all of these memory types need necessarily be provided. Furthermore, these memory components need not be physically located close to the processor 11 but may be located remotely from the processor 11.
The processor 11 is also connected to means for inputting instructions, data etc., such as a keyboard 16 and a mouse 17, for example, by a user. Other input means known to those skilled in the art may also be provided, such as a touch screen, a trackball and/or a voice converter.
A reading unit 19 is provided which is connected to the processor 11. The reading unit 19 is arranged to read data from and possibly write data on a data carrier, such as a floppy disk 20 or a CDROM 21. Other data carriers may be magnetic tapes, DVDs, CD-R, DVD-R, memory sticks, etc., as known to those skilled in the art.
The processor 11 is also connected to a printer 23 for printing out data on paper and to a display 18, such as a monitor or LCD (liquid crystal display) screen or any other type of display known to those skilled in the art.
The processor 11 may be connected to a microphone 29.
The processor 11 may be connected by way of the I/O means 25 to a communications network 27, such as the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), the internet, etc. The processor 11 may be arranged to communicate with other communication arrangements over the network 27. These connections need not be live connections while the vehicle collects data as it moves along the streets.
The data carrier 20, 21 may comprise a computer program product in the form of data and instructions arranged to provide the processor with the ability to perform a method according to the invention. Alternatively, however, this computer program product may be downloaded via the telecommunications network 27.
The processor 11 may be implemented as a stand-alone system, or as a plurality of parallel operating processors each arranged to implement sub-tasks of a larger computer program, or as one or more main processors having a number of sub-processors. Part of the functionality of the present invention may even be implemented by a remote processor communicating with the processor 11 over the network 27.
It can be seen that, when applied in the car 1, the computer arrangement need not have all of the components shown in fig. 4a. For example, the computer arrangement need not have a microphone and a printer. For the implementation in the car 1, the computer arrangement requires at least the processor 11, some memory to store suitable programs, and some kind of interface to receive instructions and data from an operator and to show output data to the operator.
For post-processing of the pictures, scans and stored position and orientation data taken by the camera 9(i), laser scanner 3(j) and position/orientation measurement device, respectively, an arrangement similar to that shown in fig. 4a will be used; this arrangement will not be located in the car 1 but may conveniently be located in a building for off-line post-processing. The pictures, scans and position/orientation data are stored in one of the memories 12-15, for example after first being stored on a DVD, memory stick or the like, or after being transmitted, possibly wirelessly, from the memories 12, 13, 14, 15 in the car. All measurements are preferably also time stamped, and these various time measurements are stored as well.
In an embodiment of the invention, the arrangement shown in fig. 1 should be able to distinguish certain objects in the images taken by the cameras 9(i). Their absolute positions and, optionally, their absolute velocities at the time the image was taken should be determined as accurately as possible. Such objects should, for example, be identified so that at least a portion of them can be processed to render that portion no longer clearly visible to a viewer. For example, such portions may relate to human faces, car license plates, political statements on billboards, commercial announcements, brand names, recognizable features of copyrighted objects, and the like. One action during processing of the images may be to ignore data relating to building facades in the images taken by the cameras 9(i). That is, the data associated with the building facades remains unchanged in the final application. Location data regarding where such building facades are located can be obtained by using the methods described and claimed in co-pending patent application PCT/NL2006/050264. Of course, other techniques for identifying the location of a building facade may be used within the context of the present invention. When such building facade data is excluded, the removal of privacy sensitive data only needs to deal with objects present between the mobile mapping system and such facades.
Such objects may be moving objects relative to the fixed world. Such moving objects may be people and cars. Identifying moving objects in an image may be more difficult than identifying fixed objects. By using only one laser scanner 3(j), one can identify non-moving objects and map them appropriately to the image, but identifying moving objects in the image and then processing portions thereof is extremely difficult. Thus, in embodiments relating to objects having a certain speed, the present invention relates to MMS with one or more cameras 9(i) and two or more laser scanners 3 (j). Then, two point clouds of the same object but generated from two different laser scanners 3(j) are used to determine a short time trajectory of the moving object, which will be used to estimate the position of the object as a function of time. This estimate of the object's position over time will then be used to identify the object in the images collected by the camera 9(i) in the time period in which the laser scanner data is also collected.
The computer arrangement shown in fig. 4a is programmed to provide some functionality according to the present invention. The present invention encompasses at least two aspects: functionality related to non-moving objects and functionality related to moving objects. The functionality related to non-moving objects is outlined in the flowchart presented in fig. 4b and the functionality related to moving objects is outlined in the flowchart presented in fig. 14.
First, fig. 4b shows a flow chart in which the basic actions of the invention, as performed on the computer arrangement 10, are shown. The actions of fig. 4b are briefly listed here before being explained in detail.
In action 30, the computer arrangement 10 receives data from the range sensors 3(i) and the cameras 9(j), as well as position/orientation and time data from the position/orientation determining device on board the MMS.
In action 32, the processor identifies distinct objects (such as the car shown in fig. 5) in the data derived from the range sensors 3(i).
In action 36, the computer arrangement 10 generates a mask for one or more objects based on the range sensor data.
In action 38, the computer arrangement 10 maps these masks on the corresponding objects in the picture as taken by the one or more cameras 9(j), taking into account the different times and positions of the measurements and also taking into account the movement of the objects.
In act 40, the computer arrangement 10 processes at least a portion of the picture within the boundaries indicated by the mask so as to render the privacy sensitive data within the object illegible.
Each of these actions is now explained in more detail. For this purpose, reference is made to fig. 5 to 9. Figure 5 shows a picture with some cars on it. The picture of fig. 5 is taken with the aid of the camera 9 (2). The first visible car (which is present within the dotted rectangle) has its details clearly visible. From a privacy perspective, such details must be removed. A program as run on the computer arrangement 10 identifies the object or, in an embodiment, the portion within the object that may relate to privacy sensitive information and that should be processed. In the case of fig. 5, the program identifies the entire car or license plate in the picture and processes the entire car or license plate to make it illegible. Fig. 6-9 explain how this action may be accomplished.
Fig. 6 shows a picture of the same scene as shown in fig. 5 but taken by the range sensor 3 (1). The scanning points of the range sensor comprise distance data relating to the distance between the range sensor 3(1) and the car.
As indicated in action 30 of fig. 4b, the data shown in figs. 5 and 6 is transmitted to the computer arrangement 10. In addition to the data shown in these figures, the time-stamped position and orientation data obtained by the position/orientation determining device on board the MMS is also transmitted to the computer arrangement 10, as are the positions of the camera 9(i) and the range sensor 3(j) relative to the position of the position/orientation determining device. The transmission of the data in action 30 to the computer arrangement 10 may be done in any known way, for example via wireless transmission of the data as obtained by the MMS in the car 1, or via an intermediate medium (such as a DVD, blu-ray disc, memory stick, etc.).
In act 32, an object is identified in the range sensor data. Preferably, the computer arrangement 10 applies a facade prediction method, i.e. a method to identify a location where a facade of a building is located. As indicated above, location data regarding where such building facades are obtained by using the method described and claimed in co-pending patent application PCT/NL 2006/050264. Of course, other techniques for identifying the location of a building facade may be used within the context of the present invention. If so, data relating to the building facade in the images taken by the camera 9(i) may be ignored for any further processing, e.g. to remove privacy sensitive data from these images. Data relating to the surface may also be removed in a straightforward manner. For example, the computer arrangement 10 may use the following facts: the MMS is moving on the ground and therefore it knows the position of the ground relative to the camera 9(i) and the range sensor 3 (j). Thus, the ground can be approximated as a plane over which the MMS is moved. If desired, a ramp measurement may also be considered. Fixed and moving objects between the mobile mapping system and such facades can be found in the point cloud obtained by a range sensor 3(j) which allows mapping of image pixels to appropriate spatial locations.
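By way of illustration, the ground-plane removal just described could be sketched as follows, assuming the scan points are expressed in vehicle coordinates with the z-axis pointing up and the ground approximated by the plane z = 0 (both assumptions of the example):

```python
import numpy as np

def remove_ground(points, ground_z=0.0, tolerance=0.15):
    """Drop range-sensor points lying on the approximated ground plane.
    points: Nx3 array in vehicle coordinates (z up, an assumption);
    tolerance: how far above the plane a point may lie and still be
    treated as ground (the value is an illustrative assumption)."""
    keep = points[:, 2] > ground_z + tolerance
    return points[keep]
```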
It can be seen that the processor can use the range sensor data to augment the photogrammetric methods used to determine the position of an object relative to the image points of the mobile mapping system. In this action, a moving object and a fixed object can be detected. In addition, the trajectory followed by such a moving object may be estimated.
Preferably, the program applies a pre-filtering process to the range sensor data to identify the object of interest. This is more suitable and efficient, requiring less processing time, as the range sensor data relating to an object will generally include less data than the camera image data relating to the same object. The pre-filtering process may be based on dynamic characteristics of the object, based on a size of the object, and/or location characteristics of the object. This may result in one or more selected objects requiring further processing. Such further processing will be performed on the camera image data of the same object and such further processing may include applying any known scene decomposition techniques to identify within the image portions that are relevant to text and/or human faces, for example. Such image detection techniques are known from the prior art and need not be explained further here. Examples are: clustering and/or RANSAC based search models.
The scan points of the range sensor contain distance data relating to the distance between the range sensor 3(1) and the car of fig. 5. Based on this and other collected measurements, the computer arrangement 10 may process the scan points of the range sensor to identify the location of the car as a function of time, and its velocity vector. As indicated in action 36, the computer arrangement 10 then determines a 3D trajectory of the object, parameterized by time, that associates positions with points from the images collected by the camera 9(i), so as to produce a mask that can be used to blur the car in the picture shown in fig. 5 (or to perform any other suitable image processing technique), resulting in a situation in which no privacy-sensitive data is present in the picture of fig. 5 any more. When transformed to the position of the image of camera 9(2), the position of the mask will coincide with the position of the scanning points of the range sensor associated with the car. Such a mask is shown in fig. 7.
In action 38, the computer arrangement 10 maps this mask on the image shown in figure 5. In this way, the computer arrangement 10 uses the mask to establish the boundaries of the car in the image of fig. 5, as shown in fig. 8.
Within the boundary, in action 40, the computer arrangement 10 may perform any desired image processing technique to establish any desired result regarding the car within the boundary. For example, such a car (or any other object) may be completely removed from the image if there are enough overlapping images, or if there is other information about the scene, such that the scene can be reproduced without the object being visible at all. Thus, in this embodiment, the computer arrangement 10 removes the object image data from the image data and replaces it, in the scene captured by the camera 9(2), with the data that would have been visible to the camera 9(2) had the object not been present.
Another image processing technique is to blur the image portion associated with the object so that the private data is no longer visible. This is shown in fig. 9. However, other image processing techniques may be used that have the effect of making at least a portion of the image invisible or unrecognizable, such as defocusing that portion (see, e.g., http://www.owlnet.rice.edu/~elec431/projects95/lords/elec431.html) or replacing it with a standard image portion that does not show any private details. For example, a complete car may be replaced with an image of a standard car without a license plate. Alternatively, if the license plate has been identified within the picture of the car, only the license plate may be blurred, or it may be replaced by a white plate or by a plate with a non-private license number. "Pixelation" may be a suitable alternative technique, which divides the pixels in a bitmap into rectangular or circular cells and then recreates the image from these cells (source: http://www.leadtools.com/SDK/Functions/Pixelate.htm).
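By way of illustration, the pixelation technique could be realized as in the following numpy sketch; the cell size and the HxWx3 image layout are assumptions of the example.

```python
import numpy as np

def pixelate_masked(image, mask, cell=16):
    """Replace masked pixels with the mean colour of their cell x cell
    block, making faces or license plates unreadable.
    image: HxWx3 array (assumed layout); mask: HxW boolean array;
    cell: block size in pixels (an illustrative assumption)."""
    out = image.copy()
    h, w = mask.shape
    for r in range(0, h, cell):
        for c in range(0, w, cell):
            block = mask[r:r + cell, c:c + cell]
            if block.any():
                region = out[r:r + cell, c:c + cell]
                mean = region.reshape(-1, region.shape[-1]).mean(axis=0)
                region[block] = mean.astype(out.dtype)
    return out
```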
In an embodiment, the processor 11 is arranged to identify, using image processing analysis techniques, sub-objects within the image that comprise privacy sensitive data or other data to be removed. For example, a program running on the processor 11 may be arranged to identify a person's face by looking for facial features such as eyes, ears, nose, etc. Programs for doing this are commercially available, for example, in the Intel image processing library. Other techniques can be found via http://en.wikipedia.org/wiki/face_detection, which mentions the following links: http://www.merl.com/reports/docs/TR2004-043.pdf and http://www.robots.ox.ac.uk/~cvrg/trinity2003/schneiderman_cvpr00.pdf.
Alternatively or additionally, the program running on the processor 11 may be arranged to identify portions containing text; see, for example, http://en.wikipedia.org/wiki/Optical_character_recognition. Programs for doing this are available on the market; for example, Microsoft provides an image processing library that can be used here. Programs exist that identify the license plate on a car (i.e., the characters on it). In this way, license plates and announcements that one wishes to remove or obscure may be identified. Privacy sensitive data that one wishes to remove or obscure may also relate to private telephone numbers.
It can be seen that the cameras 9(j) will continuously take images of the surroundings of the MMS. In many cases, this results in several successive images with overlapping portions of the scene. Hence, the same object may be present in several images. An advantage of the invention is that it provides information about the position of the object in all these images. The image processing technique need only be applied once, to the object in one of these images (typically the image taken first in time), rather than to the same object in all those images. The processed object may then also be used in any of the other images. This reduces processing time. If N objects are to be identified and processed in K successive images, the use of the invention results in the object analysis of these N objects being performed only once, i.e., by the computer arrangement 10 on the range sensor data. The computer arrangement 10 need not repeat this operation for all K successive images.
In the example of figs. 5 to 9, the car is a stationary object parked along a street. The image shown in these figures is the right-hand part of an image taken by the front camera 9(2). Once identified, the object is processed to modify information therein that should no longer be recognizable. Figs. 10 to 21 relate to another example, in which there is an object that is moving relative to the fixed world. Again, the image shown in these figures is the right-hand part of an image taken by the front camera 9(2). The example relates to a walking person present in the scene, who appears in the picture taken by the camera 9(2) while the MMS is moving. The person is to be identified so that at least the part of the image relating to that person's face can be processed to be no longer recognizable, thus protecting that person's privacy.
Fig. 10 shows a picture of a person taken by the camera 9(2) on the front side of the car 1. The camera 9(2) takes a picture of a person earlier than any of the range sensors 3(1), 3 (2). For example, range sensor 3(1) has also sensed the person, but later in time. Fig. 11 shows scanning points detected by the range sensor 3 (1). Some of these scan points are associated with people who are also visible in fig. 10. A comparison of fig. 10 and 11 will show that there is a shift between the image of the person in fig. 10 and the points of the range sensor associated with the same person as shown in fig. 11.
The scanning points of the range sensor contain distance data relating to the distance between the range sensor 3(1) and the person. Based on this and other collected measurements, the computer arrangement 10 may process the scanning points of the range sensor to identify the position of the person. For a fixed object, the computer arrangement 10 could then adjust the laser scanner points of the object to the same position as seen in the image collected by the camera 9(2), to create a mask that can be used to wipe the person from the picture shown in fig. 10 (or to perform any other suitable image processing technique), so as to obtain a situation in which no privacy-sensitive data is present in the picture of fig. 10 any more. When transformed to the image of camera 9(2), the position of the mask will coincide with the position of the scanning points of the range sensor associated with the person. Such a mask is shown in fig. 12. However, the object, in this case the person, was moving; if the computer arrangement 10 erases the person based directly on the scanning points of the range sensor, a part of the person, i.e., the left part, will remain visible in the image. This is shown in fig. 13. The reason is that the person has moved between the time the camera image was taken and the time of the laser scan.
As can be seen, systems have been presented with a camera that is collocated and synchronized with a laser scanner, so that a direct correlation between range sensor data and image data is provided. Such systems have been described, for example, at http://www.imk.fhg.de/sixcms/media.php/130/3d_cameng.pdf and http://www.3dvsystems.com/technology/3D%20Camera%20for%20Gaming-l.pdf. Other systems on the market provide camera images augmented with a z-distance by combining the image from a special infrared camera with data obtained from a normal CCD sensor. However, such systems have low resolution and a high price. For moving objects, the present invention provides a much more general solution and still obtains a proper mapping of the point cloud from the range sensor onto the image from the camera, by adjusting for the motion of the object using the estimated short-time trajectory.
It is therefore desirable not only to generate such a mask based on the data of the range sensors, but also to determine motion trajectory data defining the amount and direction of movement of an object visible in at least one image taken by at least one of the cameras 9(i), between the time the object is photographed by the camera and the time the object is scanned by one of the range sensors 3(j). How such a motion vector can be determined by using at least two range sensors 3(j) will be explained below. In the following explanation, it is assumed that the motion of the object is linear, in view of the short timescales involved. However, the invention is not restricted to this embodiment. Alternatively, the motion of the object may be estimated as a non-linear trajectory, where such a trajectory is determined, for example, from scans produced by more than two range sensors 3(j).
First, FIG. 14 shows a flow chart in which the basic acts of the present invention are shown as being performed on the computer arrangement 10. The method acts shown in fig. 14 are substantially the same as in fig. 4 b. The difference is method action 34 between actions 32 and 36. In act 34, motion vectors of one or more moving objects are calculated by the computer arrangement 10. Furthermore, in an action 38 the computer arrangement 10 maps the mask generated in action 36 on the corresponding moving object in the picture taken by the one or more cameras 9(j), taking into account the different positions of the measurements and also taking into account the movement of the object.
Now, action 34 (among others) is explained in more detail.
Actions 30 and 32 have been explained above with reference to fig. 4 b.
For the calculation of the motion trajectory of an object performed in action 34, the following assumption is made: between the time a moving object is photographed by one of the cameras 9(i) and the times it is sensed by two or more of the range sensors 3(j), the moving object does not change its speed and direction of travel to any great extent. Thus, the motion vector is assumed to be substantially constant during that time. One could say that a "short-time trajectory approximation" is used, which is an excellent approximation in view of the short time periods involved between successive pictures and scans. Thus, the magnitude and direction of the motion vector can be estimated from successive scans from two different range sensors 3(j).
It can be seen that in alternative embodiments, three or more range sensors may be used. If so, more than two range sensor scans may be used to identify the movement of the object, resulting in a higher order approximation of the object movement trajectory than would be obtained by using two range sensors.
When the object is moving and is scanned first by the range sensor 3(1) and then by the range sensor 3(2), then the position of the object should be different between the two scans. This is schematically shown in fig. 15a, 15b, 15 c. In fig. 15a, at time t1, the object is seen by range sensor 3 (1). In fig. 15b, at time t2 later than time t1, the object is not seen by any of the range sensors 3(i) but is only in the field of view of the camera 9 (1). In fig. 15c, at time t3 later than time t2, the object is in the field of view of the range sensor 3 (2).
It can be seen that fig. 15a and 15c display a "location uncertainty" region, which indicates that the range sensor 3(i) may not identify the location of the object with 100% accuracy. This is due to the fact that a range sensor, e.g. a laser scanner, needs to scan the object several times to identify it. This takes some time. During this time period, the object may move itself. As will be explained below with reference to fig. 22, a fast moving object can be detected to be larger than it actually is. The use of the concept of centroids to identify movement can result in errors, which, however, can be corrected by shape correction techniques as will be explained below.
It will be explained below how the computer arrangement 10 may use range sensor scan data to derive a motion vector of an object. After deriving this, the computer arrangement 10 may calculate the velocity of the object and the position of the object. The problem is assumed to be 2D only, since one can assume that most objects on the road move only on a plane. Of course, the principles of the present invention may be extended to include "flying" objects, i.e., objects that move but do not contact the ground.
A first scene scanned by the range sensor 3(1) is compared with a second scene scanned by the range sensor 3 (2). If an object has moved, it cannot be in the same location in the second scene as in the first scene.
The computer arrangement 10 may compare the two range sensor point clouds in the following manner. First, the computer arrangement 10 calculates two sets of points, representing the differences between the two scans:

DIFF1 = scan1 - scan2
DIFF2 = scan2 - scan1

where: scan1 = the point cloud in the scan of the first range sensor 3(1); scan2 = the point cloud in the scan of the second range sensor 3(2).
It should be noted that, prior to performing these set operations, the points in the point clouds are associated with their correct (absolute) locations. Performing the operations then yields two sets of point clouds representing the moving objects at two different time instants. Since in practice moving objects are spatially separate, the corresponding parts of the two sets of point clouds are also spatially separate, which allows the computer arrangement 10 to efficiently decompose them into sets of points, each representing a separate moving object. By applying these operations to both DIFF1 and DIFF2, the computer arrangement 10 obtains two sets of point clouds representing the same objects.
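A minimal sketch of the DIFF1/DIFF2 set operations, treating two points as "the same" when they lie within a small distance of each other in absolute coordinates (the threshold value is an assumption of the example):

```python
import numpy as np

def scan_difference(scan_a, scan_b, threshold=0.2):
    """Return the points of scan_a that have no counterpart in scan_b
    within `threshold` metres (DIFF = scan_a - scan_b as a set
    operation on absolute-world points; threshold is an assumption).
    Brute-force nearest-neighbour test; fine for modest point counts."""
    diffs = scan_a[:, None, :] - scan_b[None, :, :]   # (Na, Nb, 3)
    dists = np.linalg.norm(diffs, axis=2)             # (Na, Nb)
    unmatched = dists.min(axis=1) > threshold
    return scan_a[unmatched]

# DIFF1 = scan_difference(scan1, scan2)
# DIFF2 = scan_difference(scan2, scan1)
```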
Any decomposition technique known to those skilled in the art may be applied. In general, the following technique may be used. The range sensor scan of the first range sensor 3(1) is divided into individual point clouds, where each individual point cloud is associated with a single object. To do this, points with a similar layout in planar absolute world coordinates and within a range distance of each other are aggregated (clustered) together. The range distance is determined adaptively for each group of points, depending on the sequence position in the scan and/or the distance to the range sensor. Since moving objects are spatially well separated, their average distance to points belonging to other groups is significantly different.
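By way of illustration, the clustering step could look like the following sketch, in which a fixed range distance stands in for the adaptive threshold described above:

```python
import numpy as np

def cluster_points(points, range_distance=0.5):
    """Greedy single-linkage clustering of an Nx3 difference cloud into
    per-object point clouds. The fixed range_distance is a stand-in for
    the adaptive threshold of the text (an assumption of the example)."""
    remaining = list(range(len(points)))
    clusters = []
    while remaining:
        seed = remaining.pop()
        cluster = [seed]
        frontier = [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in remaining
                    if np.linalg.norm(points[i] - points[j]) < range_distance]
            for j in near:
                remaining.remove(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append(points[cluster])
    # The centroid of each cluster, cluster.mean(axis=0), can serve as
    # the object's characteristic point (see the centroid formula below).
    return clusters
```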
The detected objects in the first and second scenes produced by the decomposition method are analyzed by comparing object characteristics in the two scenes to find the same object in the two scenes. Each group of points is analyzed with respect to its shape. The computer arrangement 10 calculates for each group whether it fits with a certain basic shape, such as a box, a cylinder, a sphere, a plane, etc. For each group, the group is replaced with such a basic shape and the computer arrangement 10 stores basic features of the basic shape, such as height, diameter, width, etc. The computer arrangement 10 then repeats the same procedure for scanning by the second range sensor 3 (2). The computer arrangement 10 is now able to compare the detected objects in the two scans by comparing the basic shapes present in the two scans and then match or fit the different objects in the two scans.
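A sketch of such shape-based matching is given below; the descriptor layout and the feature-distance measure are assumptions of the example (a Hausdorff-style measure, mentioned next, is an alternative):

```python
def shape_distance(a, b):
    """Compare two basic-shape descriptors, e.g.
    {'kind': 'cylinder', 'height': 1.8, 'diameter': 0.5}
    (the descriptor layout is an illustrative assumption)."""
    if a['kind'] != b['kind']:
        return float('inf')       # different basic shapes never match
    keys = (set(a) & set(b)) - {'kind'}
    return sum(abs(a[k] - b[k]) for k in keys)

def match_shapes(shapes1, shapes2, max_dist=0.5):
    """Greedily pair each shape from scan 1 with the closest unused
    shape from scan 2 (the threshold value is an assumption)."""
    unused = set(range(len(shapes2)))
    pairs = []
    for i, s1 in enumerate(shapes1):
        best = min(unused, key=lambda j: shape_distance(s1, shapes2[j]),
                   default=None)
        if best is not None and shape_distance(s1, shapes2[best]) <= max_dist:
            pairs.append((i, best))
            unused.remove(best)
    return pairs
```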
The computer arrangement 10 may use various known techniques to determine the matching pairs in those two sets. For example, a modified Hausdorff distance measure extended to 3D may be applied (see, e.g., http://citeseer.ist.psu.edu/cache/papers/cs2/180/http:zSzzSzwww.cse.msu.eduzSzpripzSzFileszSzDubuissonJain.pdf/a-modified-hausdorff-distance.pdf).
Each object in the scene has characteristic points that are present in all scenes. For example, a suitable characteristic point of an object is the centroid of the shape bounding the object, which can be calculated from the subset of the point cloud identified as belonging to that object. Some shape characteristics of the object, indicating for example the scale of the object, may be added as characteristic points. When the object is a person, the person may be approximated by a cylinder (with a diameter and height corresponding to the average size of a human body). In a group of people, persons may be so close to each other that the computer arrangement 10 cannot separate individuals from the group. Such a group of people can then be analyzed as a single object, within which individual faces cannot be detected. Still, privacy sensitive data may need to be removed from the image showing such a group of people. This can be solved by blurring the entire group. Alternatively, the portions of the image associated with the group may be pixelated. As a further alternative, face portions may be identified even in a crowd, using the face detection techniques mentioned above, followed by blurring those faces with image processing techniques, replacing them with some standard face picture, and so on. Manual image processing may be performed if automatic image processing does not produce acceptable results.
In most cases, there is no ambiguity as to matching the basic shape detected in the first scan by the range sensor 3(1) with the basic shape detected in the second scan by the range sensor 3 (2). If there is ambiguity, it can be solved by extending this approach by means of object recognition techniques applied to candidates for matching objects in image portions (regions of interest) and comparing image properties in the color space of the ROI for each object in each of the images made by the camera 9 (j). Manual intervention may also be applied if ambiguity remains.
The computer arrangement 10 then determines the absolute position of the object, i.e. the position of the centroid in both scenes. How the centroid can be calculated will be explained below with reference to fig. 16 and 17.
The absolute position of the object can be determined in any known way from position data also received from the MMS, linked to the time at which the camera 9(j) takes a picture and the range sensor 3(i) scans, as known to the person skilled in the art. The calculation of the absolute position from the received MMS data may be performed in the manner explained in detail in international patent application PCT/NL 2006/000552.
The computer arrangement 10 may calculate the motion vector of the object as the difference between the two absolute positions of the same object in the first scene and the second scene. The position of an object in a scene may be taken to be the position of the centroid of the object in that scene. The computer arrangement 10 may use the motion vector to calculate the position of the object at any time, under the assumption that the speed of the object does not change rapidly, an assumption that is valid in the time period t1 to t3 and also at times close to this period. As seen above, when there are several moving objects in a scene, some ambiguity may arise in the analysis as to which centroid relates to which object. When the calculation of the motion vector is started, it is assumed that this ambiguity has been resolved and all objects have been correctly identified, e.g., by using point cloud characteristics or the like, as explained above.
The computer arrangement 10 calculates the position (x2, y2) of fig. 15b using the following calculation.
The computer arrangement 10 uses the following data:
● (x1, y1): the absolute position of the object at time t1, calculated by the computer arrangement 10 from the position data received from range sensor 3(1) (which is relative to the car 1) and the absolute position data of the car 1 at time t1 (as calculated by the position determination device shown in fig. 1). A suitable method by which the position determination device shown in fig. 2 calculates the position of the car 1 is described in international patent application PCT/NL2006/000552; however, other methods may be used.
● (x3, y3): the absolute position of the object at time t3, calculated by the computer arrangement 10 from the position data received from range sensor 3(2) (which is relative to the car 1) and the absolute position data of the car 1 at time t3 (as calculated by the position determination device shown in fig. 1).
● t1: the time at which range sensor 3(1) sensed the object; this time has been recorded by the microprocessor in the car 1 and later stored in the memory of the computer arrangement 10.
● t2: the time at which camera 9(1) photographed the object; this time has been recorded by the microprocessor in the car 1 and later stored in the memory of the computer arrangement 10.
● t3: the time at which range sensor 3(2) sensed the object; this time has been recorded by the microprocessor in the car 1 and later stored in the memory of the computer arrangement 10.
In the calculation, it has been assumed that the speed of the car 1 is substantially constant during the period t1 to t3.
The computer arrangement 10 calculates the position (x2, y2) of the object at time t2 as follows. It starts from the commonly known equation for calculating the speed V of an object from the travelled distance Δs during time Δt:

V = Δs / Δt

The speed V can be considered as a motion vector associated with the object. Decomposing V into x, y components (Vx, Vy), decomposing Δs into x, y components (Δsx, Δsy), and substituting t3 − t1 for Δt:

Vx = (x3 − x1) / (t3 − t1)
Vy = (y3 − y1) / (t3 − t1)

From this, (x2, y2) can be derived as follows:

x2 = x1 + Vx · (t2 − t1)
y2 = y1 + Vy · (t2 − t1)
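By way of illustration only, the following Python sketch performs this constant-speed interpolation; the function name and the sample values are hypothetical and merely demonstrate the arithmetic above.

```python
import numpy as np

def interpolate_position(p1, p3, t1, t2, t3):
    """Estimate an object's absolute position at camera exposure time t2,
    assuming constant speed between the range sensor scans at t1 and t3."""
    p1, p3 = np.asarray(p1, dtype=float), np.asarray(p3, dtype=float)
    v = (p3 - p1) / (t3 - t1)   # motion vector: V = delta_s / delta_t, per axis
    p2 = p1 + v * (t2 - t1)     # x2 = x1 + Vx*(t2 - t1), likewise for y
    return v, p2

# Example: object centroids measured at t1 and t3, camera exposure at t2
v, p2 = interpolate_position((12.4, 3.1), (14.0, 3.3), t1=0.0, t2=0.45, t3=1.0)
```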
The positions (xi, yi) calculated above are associated with the centroid of the object of interest. The following assumptions are made: the mass is evenly distributed over the object of interest, and the shape of the object is substantially unchanged. If this is not the case, then the calculated centroid is in fact the geometric center of the object. This is not important for the purpose of calculating motion vectors.
It can be said that all scan data of the range sensor 3(i) relating to a particular object at time ti form a "point cloud" of measurement points. FIG. 16 shows such a point cloud. Each scanned point relating to one object (obtained by the decomposition method explained above and executed by the computer arrangement 10) is indicated by a small circle, at a distance ri from an arbitrary origin (e.g. defined by a position defined for the car 1). The computer arrangement 10 calculates a centroid for each such point cloud. For an object like a person, the computer arrangement 10 also calculates a cylinder approximating that person, as shown in fig. 12. Other objects may be approximated by other shapes. The shape together with the centroid forms a description of the object.
The geometric centroid (x̄, ȳ) of the points of such a cloud is derived from the following formula:

x̄ = (1/n) · Σ xi
ȳ = (1/n) · Σ yi

where the sums run over the n scan points (xi, yi) of the cloud.
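A minimal sketch of this computation, assuming the scan points attributed to one object are available as an n-by-2 (or n-by-3) array; the function name is hypothetical:

```python
import numpy as np

def point_cloud_centroid(points):
    """Geometric centroid of a point cloud: the per-axis mean of all
    scan points attributed to one object (all points weighted equally)."""
    pts = np.asarray(points, dtype=float)   # shape (n, 2) or (n, 3)
    return pts.mean(axis=0)
```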
It can be seen that the range sensors 3(i) may observe in different directions, as indicated schematically in fig. 18. In fig. 18, a dotted circle indicates an object. The range sensor 3(1) observes in a direction at an angle α to the viewing direction of the camera 9(1), and the range sensor 3(2) observes in a direction at an angle β to the viewing direction of the camera 9(1). It can be shown that the computer arrangement 10 can calculate the position of the object most accurately from the data received from the range sensors 3(i) when both angles α, β are 0°. However, the present invention is not limited to this value. In fact, to calculate the speed accurately, the distance indicated by a + d + b should be as large as possible, while remaining consistent with the assumption that the trajectory time is short.
In act 36, the computer arrangement 10 generates a mask defined by at least a portion of the scan points within one cloud associated with one object. FIG. 19 shows this mask for the point cloud associated with the person shown in fig. 10. The mask is derived from the point cloud associated with the object corresponding to the same object in the image. The mask used has a fixed shape, which is valid for objects that do not substantially change their shape even while moving, on the time scale involved; this proves to be a suitable assumption for objects that move only slowly. If the object is moving rapidly, its shape as detected by the range sensor 3(i) should first be corrected by the computer arrangement 10. How this may be done will be explained below with reference to fig. 22.
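By way of a non-limiting sketch, one possible realization fills the convex hull of the scan points after they have been projected into camera pixel coordinates (the projection step itself is omitted); the function name is hypothetical:

```python
import cv2
import numpy as np

def mask_from_points(pixel_points, image_shape):
    """Binary mask covering the convex hull of an object's scan points.
    pixel_points : (n, 2) array of (u, v) positions already projected into
    the camera image; image_shape : (height, width[, channels])."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    hull = cv2.convexHull(np.asarray(pixel_points, dtype=np.int32))
    cv2.fillConvexPoly(mask, hull, 255)   # fixed-shape mask for this object
    return mask
```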
In act 38, the computer arrangement 10 maps this mask, while using the calculated motion vector, onto position (x2, y2) in the image as shown in fig. 10. In this way, the computer arrangement 10 uses the mask to establish the boundaries of the object in the image of fig. 10, as shown in fig. 20.
In act 40, the computer arrangement 10 may perform any desired image processing technique on the object within the boundary line, to establish any desired result with respect to that object.
As indicated above, one such image processing technique is to blur the image portion associated with the object, so that the private data is no longer visible. This is shown in fig. 21. However, other image processing techniques having the effect of making at least a part of the image invisible or unrecognizable may also be used, such as defocusing that part (see e.g. http://www.owlnet.rice.edu/~elec431/projects95/lords/elec431.html) or replacing that part with a standard image portion that does not show any private details. For example, a standard puppet face may be substituted for a human face. "Pixelation" may also be a suitable technique, as already noted in relation to fig. 4b.
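As a non-limiting sketch of act 40, the region selected by the mask can be blurred with a standard Gaussian filter; pixelation would follow the same pattern with a downscale/upscale step instead. The function name is hypothetical:

```python
import cv2

def blur_masked_region(image, mask, ksize=31):
    """Blur only the pixels selected by the binary mask, leaving the
    rest of the image intact; ksize sets the blur strength (odd)."""
    blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)
    out = image.copy()
    out[mask > 0] = blurred[mask > 0]   # replace pixels inside the mask only
    return out
```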
In an embodiment, the processor 11 is arranged to identify, within the moving object, sub-objects comprising privacy-sensitive data or other data to be removed. For example, a program running on the processor 11 may be arranged to identify a person's face by looking for facial features such as eyes, ears, nose, etc. Programs for doing this are commercially available; Microsoft provides an image processing library that can be used here. Other techniques can be found at http://en.wikipedia.org/wiki/face_detection, which mentions the following links: http://www.merl.com/reports/docs/TR2004-043.pdf and http://www.robots.ox.ac.uk/~cvrg/trinity2003/schneiderman_cvpr00.pdf.
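For instance, a face detector of this kind could be applied to the image region delimited by the object mask; the sketch below uses OpenCV's stock Haar cascade purely as one example of such a detector, not as the specific library referred to above:

```python
import cv2

# Stock frontal-face Haar cascade shipped with opencv-python
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(image_region):
    """Return bounding boxes (x, y, w, h) of faces found in the region."""
    gray = cv2.cvtColor(image_region, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```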
Alternatively or additionally, the program running on the processor 11 may be arranged to identify portions containing text. Programs for doing this are available on the market; see for example http://en.wikipedia.org/wiki/Optical_character_recognition, which mentions image processing libraries that may be used here. Programs also exist that identify the license plate (i.e. the characters on it) of a car; such programs are used, for example, on road segments with speed enforcement. In this way, license plates, telephone numbers and announcements that one wishes to remove or obscure may be identified.
So far, it has been discussed how to handle non-moving and moving objects. However, when a moving object moves rapidly relative to the speed of the MMS itself, the size of the object as observed by the range sensor 3(j) will deviate from its actual size. For example, a car overtaking or being overtaken by the MMS continues to be scanned by the range sensor 3(j) for a longer period of time than if it were stationary or nearly stationary. Thus, this car appears longer than it really is. If the car is moving in the opposite direction, the opposite effect occurs. For pictures taken by the camera 9(i) this is not a problem, since the camera has a high shutter speed: the picture shows the actual size of the car.
The difference between the actual size of the car and its size in the point cloud observed by the range sensor 3(j) may result in a mask derived from the range sensor data that obscures too little or too much. Therefore, a program running on the computer arrangement 10 has to compensate for this observed size error. This is done in act 36.
Fig. 22 explains how the speed and size and, optionally, the shape of a fast-moving object can be determined. The upper part of the figure relates to the MMS overtaking a car, while the lower part relates to a car passing in the opposite direction.
The observed length of a moving object is determined by the scanning time t_scanning of one of the range sensors. The upper part of fig. 22 shows the moment at which the range sensor 3(1) detects the car for the first time. The scanning time of the range sensor 3(1) is the time between its first and its last detection of the car (the last detection is not shown in fig. 22).
The actual speed V_real of the car is defined by the formula:

V_real = V_MMS + V_relative

where:
V_real = the actual speed of the car;
V_MMS = the speed of the MMS, determined from the position determining device data;
V_relative = the speed of the car relative to the MMS, calculated with the same formula used to calculate the speed of the person of fig. 10.
The observed length L_observed of the car, calculated from the range sensor data and the position determination device data, is derived from:

L_observed = V_MMS · t_scanning

However, the actual length L_real of the car is derived from:

L_real = L_observed − L_corr

where L_corr corrects for the fact that the car has its own speed and equals:

L_corr = V_real · t_scanning

Thus, the actual length of the car is given by:

L_real = (V_MMS − V_real) · t_scanning
It should be noted that in this latter equation the actual speed V_real of the car should be subtracted from the speed of the MMS if the car is travelling in the same direction as the MMS, and should be added if it is travelling in the opposite direction. Whether the car is travelling in the same or the opposite direction follows from the speed calculation based on the data of the two range sensors 3(1), 3(2).
Once the actual length L_real of the car has been established, the computer arrangement 10 takes it into account in act 36 while calculating the mask. That is, for example, the computer arrangement multiplies the length of the mask by a factor F equal to:

F = L_real / L_observed
the mask so obtained is used in act 38.
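Putting the equations above together, a sketch of this size correction might look as follows; the function name and the direction flag are assumptions, as is the use of speed magnitudes throughout (the absolute value guards against a negative length when the car overtakes the MMS):

```python
def mask_scale_factor(v_mms, v_relative, t_scanning, same_direction=True):
    """Factor F = L_real / L_observed by which the mask length is multiplied.
    v_mms      : speed of the MMS (from the position determining device)
    v_relative : speed of the car relative to the MMS (magnitude)
    t_scanning : time between first and last detection by the range sensor"""
    v_real = v_mms + v_relative               # V_real = V_MMS + V_relative
    l_observed = v_mms * t_scanning           # L_observed = V_MMS * t_scanning
    if same_direction:
        l_real = abs(v_mms - v_real) * t_scanning   # subtract for same direction
    else:
        l_real = (v_mms + v_real) * t_scanning      # add for opposite direction
    return l_real / l_observed
```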
Overview
As explained above, the present invention relates to determining the location of objects, such as facades, road furniture, sidewalks and vehicles, within images taken by one or more digital cameras on a moving car (e.g. an MMS). One or more range sensors arranged on the car are used to generate masks with which objects in such an image can be identified, after which image processing actions are performed on those objects or portions thereof. The size and trajectory of objects that may move in an image can be approximated using an arrangement of two or more range sensors (such as laser scanners or other range sensors) attached to such an MMS, which produce scans of the same scene as captured by a camera mounted on the MMS. When, for example, any known building/facade detection algorithm is used, objects within the picture taken by the camera are identified (while using the range sensor data), and one or more image processing filters are then applied to such objects, the following advantages can be achieved:
1. those portions are protected by changing the resolution or other image visual characteristics of privacy sensitive or other undesirable portions on the image.
2. The image portions associated with an object or portion thereof in one image may be used in other MMS-collected images displaying the same object after processing of undesirable data about the same object therein. Thus, any image processing action performed on the object need only be applied once and need not be reapplied in a different image.
3. Static objects in the range sensor points can be distinguished from slow and fast moving objects. Its actual length and actual speed may be determined.
4. Laser scan detection of the camera image may be adjusted to accurately obscure objects in the image based on the length and velocity values determined and the time difference between laser scanning and image capture.
5. Undesired objects in the region between the car and the facade may be removed and information from another image or from the current image may be processed to replace the foreground image.

Claims (25)

1. A computer arrangement (10) comprising a processor (11) and a memory (12; 13; 14; 15) connected to the processor, the memory comprising a computer program comprising data and instructions arranged to allow the processor (11) to:
receiving time and position data from a position determining device on board a mobile system, and first range sensor data from at least a first range sensor (3(1)) on board the mobile system and image data from at least one camera (9(j)) on board the mobile system;
identifying a first point cloud within the first range sensor data associated with at least one object;
generating a mask associated with the object and based on the first point cloud;
mapping the mask onto object image data relating to the same object present in the image data from the at least one camera (9(j));
performing a predetermined image processing technique on at least a portion of the object image data.
2. Computer arrangement (10) according to claim 1, wherein the computer program is arranged to allow the processor (11) to:
receiving second range sensor data from a second range sensor (3(2)) on board the mobile system;
identifying a second point cloud within the second range sensor data that is related to the same at least one object;
calculating a motion vector of the at least one object from the first point cloud and the second point cloud;
mapping the mask onto the subject image data while using the motion vector.
3. Computer arrangement (10) according to claim 2, wherein the computer program is arranged to allow the processor (11) to:
calculating an actual size of the object based on the first range sensor data and the second range sensor data;
using the actual size while generating the mask.
4. Computer arrangement (10) according to claim 3, wherein the computer program is arranged to allow the processor (11) to:
calculating an observed size of the object based on one of the first range sensor data and the second range sensor data;
calculating the mask based on the observed size and the actual size.
5. Computer arrangement according to claim 2 or 3, wherein the processor (11) calculates the motion vector on the assumption that the speed and direction of motion of any object detected within the range sensor data from the at least first and second range sensors (3(i)) are substantially constant.
6. Computer arrangement according to any of the claims 2-5, wherein said identifying the second point cloud comprises distinguishing the at least one object from a reference object fixed to the earth.
7. Computer arrangement according to any of the preceding claims, wherein said identifying said first point cloud comprises distinguishing said at least one object from a reference object fixed to the earth.
8. Computer arrangement according to claim 6 or 7, wherein the reference object is a building.
9. Computer arrangement according to any of the preceding claims, wherein said predetermined image processing technique comprises at least one of blurring said at least one portion, defocusing said at least one portion and replacing said at least one portion with predetermined image data.
10. Computer arrangement according to any of the preceding claims, wherein the computer program is arranged to allow the processor (11) to identify the at least one portion by using at least one of object recognition techniques and character recognition techniques.
11. Computer arrangement according to any of the preceding claims, wherein said at least one portion comprises privacy sensitive data.
12. Computer arrangement according to any of the preceding claims, wherein the object image data belongs to a certain scene, the predetermined image processing technique comprising removing the object image data from the image data and replacing it in the scene with data that would be visible in the scene if the object were not present.
13. Computer arrangement according to any of the preceding claims, wherein the image data relates to a plurality of images, each of the plurality of images displaying the same object, and the processor (11) is arranged to generate a processed object image by performing the action of the predetermined image processing technique on at least the portion of the object image data in one of the plurality of images, and to replace the object in other images of the plurality of images with the processed object image.
14. A data processing system comprising a computer arrangement according to any one of the preceding claims and a mobile system comprising a position determining device for providing the time and position and orientation data, at least a first range sensor (3(i)) for providing the first range sensor data and at least one camera (9(j)) for providing the image data.
15. A method of mapping first range sensor data from a first range sensor (3(1)) to image data from at least one camera (9(j)), both the first range sensor (3(1)) and the at least one camera (9(j)) being positioned on a mobile system in a fixed relationship to each other, the method comprising:
receiving time and position data from a position determining device on board the mobile system, and the first range sensor data from the first range sensor (3(1)) on board the mobile system and the image data from the at least one camera (9(j)) on board the mobile system;
identifying a first point cloud within the first range sensor data associated with at least one object;
generating a mask associated with the object and based on the first point cloud;
mapping the mask onto object image data relating to the same object present in the image data from the at least one camera (9(j));
performing a predetermined image processing technique on at least a portion of the object image data.
16. The method of claim 15, wherein the method includes:
receiving second range sensor data from a second range sensor (3(2)) on board the mobile system;
identifying a second point cloud within the second range sensor data that is related to the same at least one object;
calculating a motion vector of the at least one object according to the first point cloud and the second point cloud;
mapping the mask onto the object image data while using the motion vector.
17. The method of claim 16, wherein the method includes:
calculating an actual size of the object based on the first range sensor data and the second range sensor data;
using the actual size while generating the mask.
18. A computer program product comprising data and instructions that can be loaded by a computer arrangement, allowing said computer arrangement to perform any of the methods according to claims 15-17.
19. A data carrier provided with a computer program product according to claim 18.
20. A computer arrangement (10) comprising a processor (11) and a memory (12; 13; 14; 15) connected to the processor, the memory comprising a computer program comprising data and instructions arranged to allow the processor (11) to:
receiving time and position data from a position determining device on board a mobile system, first range sensor data from at least a first range sensor (3(1)) on board the mobile system, and second range sensor data from a second range sensor (3(2)) on board the mobile system;
identifying a first point cloud within the first range sensor data associated with at least one object;
identifying a second point cloud within the second range sensor data that is related to the same at least one object;
calculating a motion vector of the at least one object from the first point cloud and the second point cloud.
21. Computer arrangement according to claim 20, wherein the computer program is arranged to allow the processor (11) to:
calculating the motion vector while calculating a first centroid of the first point cloud and a second centroid of the second point cloud, and
establishing a path length between the first centroid and the second centroid.
22. Computer arrangement according to claim 20 or 21, wherein the computer program is arranged to allow the processor (11) to:
calculating an actual length of the object based on the first range sensor data and second range sensor data.
23. A method of computing a motion vector for an object, comprising:
receiving time and position data from a position determining device on board a mobile system, first range sensor data from at least a first range sensor (3(1)) on board the mobile system, and second range sensor data from a second range sensor (3(2)) on board the mobile system;
identifying a first point cloud within the first range sensor data associated with at least one object;
identifying a second point cloud within the second range sensor data that is related to the same at least one object;
calculating a motion vector of the at least one object from the first point cloud and the second point cloud.
24. A computer program product comprising data and instructions which are loadable by a computer arrangement, so as to allow said computer arrangement to perform the method according to claim 23.
25. A data carrier provided with a computer program product according to claim 24.