GB2635771A - LIDAR data processing
- Publication number
- GB2635771A (application GB2318079.7A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- blank
- data point
- data
- moving average
- time series
- Prior art date
- Legal status: Pending (assumed; not a legal conclusion)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/497—Means for monitoring or calibrating
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/66—Tracking systems using electromagnetic waves other than radio waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/4802—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/497—Means for monitoring or calibrating
- G01S2007/4975—Means for monitoring or calibrating of sensor obstruction by, e.g. dirt- or ice-coating, e.g. by reflection measurement on front-screen
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Electromagnetism (AREA)
- Optical Radar Systems And Details Thereof (AREA)
- Traffic Control Systems (AREA)
Abstract
Techniques are presented for processing LIDAR data from a sensor mounted, for example, on a gantry over a roadway, to identify "active blank" data points (caused by an object with low or no reflectivity) and "inactive blank" data points (caused by a faulty or blocked sensor). The process takes a time series of data points from a LIDAR sensor and, for each time series, classifies each data point as blank (no detection) or non-blank (a detection). The system determines a first rolling average, being a measure of how often a data point is classified as blank, and a second rolling average, being a measure of how often a data point is classified as non-blank. Each blank data point is then categorised by comparing the two rolling averages: where the first rolling average exceeds a threshold relative to the second rolling average the blank data point is classified as "inactive blank", and otherwise as "active blank".
Description
LIDAR DATA PROCESSING
The present invention relates to processing of data from LIDAR sensors, particularly sensors disposed to detect vehicles travelling along a roadway.
BACKGROUND TO THE INVENTION
LIDAR ("light detection and ranging") sensors can be used to accurately determine the range of a target, i.e. the distance between the sensor and the target. In particular, these sensors are often used mounted on for example a gantry above a road, to detect vehicles as they travel along the road.
This arrangement can be used, for example, on a toll road. The LIDAR scanners are mounted to detect vehicles and, in particular, to distinguish, for example, cars, vans, lorries and buses, which may be charged different tolls. The scanners can also be used to accurately direct a camera which can then photograph a licence plate of each vehicle. This requires processing of the LIDAR sensor data in near real time, so that a photograph can be taken as soon as the rear end of the vehicle is detected at a relevant point.
A LIDAR sensor typically scans along a line, and outputs data as to the range of an object from the sensor at each point along the line. The sensor may scan along multiple lines concurrently, for example three lines separated by a few degrees (typically around 12 degrees). LIDAR sensors are also available (and applicable to this invention) which gather data points not just along straight lines but in all directions within a field of view.
The LIDAR sensor works essentially by bouncing a beam of light off the target and timing how long it takes to return to the sensor to determine a range. It relies therefore on being able to detect the reflection from the target object. In some cases, the object may have low reflectivity and, as a result, the return signal is not detected. This results in a "blank" signal, i.e. unknown range. However, "blank" signals can also occur due to faults with the LIDAR sensor, for example damage or dirt on a part of the lens.
Blank signals due to poor reflectivity are problematic, since essentially the sensor and follow-on processing do not know whether an object (e.g. a vehicle) is present or not. One common low-reflectivity surface is the top of the trailer of an articulated lorry; the particular problem often associated with this is that the LIDAR sensor will detect the cab of the articulated vehicle, and may well detect the back of the trailer, but will have a series of blank readings in between. This may be detected as two vehicles rather than one long vehicle.
LIDAR sensors are often used, for example, to direct cameras to take photographs of rear numberplates of vehicles. Accordingly, the data needs to be processed in near real-time in order to capture every vehicle. Where a vehicle is detected and then blank readings occur, the system potentially needs to decide whether or not the vehicle has passed the sensor, without waiting to see if (for example) the back of a long vehicle is detected after a series of blank readings.
It is an object of the present invention to solve this problem, and in particular to classify blank pixels as either "active" blank pixels, likely caused by a low-reflectivity surface, or "inactive" blank pixels, likely caused by a sensor fault.
STATEMENT OF INVENTION
According to the present invention there is provided a method of processing data from a LIDAR sensor directed at a roadway, the data from the LIDAR sensor being a time series associated with each of a plurality of ranging directions, each data point in a time series representing a range from the sensor in the associated ranging direction at a particular time, the method comprising: for each time series: classifying each data point as: "blank", where the data point indicates that the return signal from the LIDAR sensor has not been detected; and/or "non-blank", where the data point indicates a range from the sensor, maintaining a first moving average associated with the time series wherein the first moving average is a measure of how often a data point in the time series has been classified as blank; maintaining at least a second moving average associated with the time series wherein the second moving average is a measure of how often a data point in the time series has been classified as non-blank; for each blank data point in the time series, making a comparison between the first and second moving averages at the time step associated with the blank data point, and classifying the blank data point as an "active blank" or an "inactive blank" depending on whether the first moving average exceeds a threshold condition relative to the second moving average.
Preferably, the non-blank data points are further classified as: "empty road", where the data point indicates a range consistent with the LIDAR sensor detecting the road surface; and/or "active", where the data point indicates a range consistent with a vehicle present on the road.
By classifying blank data points as "active" or "inactive" in this way, the data from the sensor can be better understood. An inactive blank is simply treated as missing data; an active blank, however, is likely to indicate the presence of an object, for example a vehicle, with a low-reflectivity surface. In post-processing of data, the range associated with active blanks can be estimated, for example by interpolating between spatially and/or temporally adjacent data, leading to better results. Moreover, where "active blank" signals are being received, a downstream system knows that a vehicle is still likely to be present under the sensor. Accordingly it will delay "completing" the vehicle and, for example, directing a camera to photograph the rear of it. Note that the "active blank" is associated with a particular ranging direction (i.e. a fairly precise angle from the sensor), and so knowing that there is an object in that direction (even though the range to its surface is unknown) is useful information.
In this way, a detected object (i.e. a series of "active" data points), followed by a series of "active blank" data points, followed by a series of "active" data points, is correctly interpreted as a single vehicle, for example a lorry, rather than two vehicles. This happens in real time, since the "active blank" data points are an indication that a vehicle continues to be present under the sensor. A follow-on action, for example a camera taking a photograph of the rear numberplate, can be correctly triggered to coincide with the rear of the vehicle passing the relevant point.
The classification as to "empty road" or "active" is done simply by comparing the measured range at the data point with a threshold. Since the road surface is a known distance from the sensor, ranges within a band of ranges around that distance can be classified as "empty road" and ranges significantly shorter than that (i.e. less than the known distance to the road surface by at least a predetermined difference) can be classified as "active".
Each data point represents a range from the sensor in the associated ranging direction at the relevant time. However, it may be convenient to convert the range (from the sensor) to a height (above the road) at an early point in the processing. This can be done according to known techniques. The classification as to "empty road" or "active" can then be based on a height threshold.
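By way of illustration only (this sketch does not form part of the invention), the initial three-way classification might be expressed in Python as follows. The function name, the use of None for an undetected return, and the 0.2 m threshold (taken from the embodiment described later) are illustrative assumptions:

```python
from typing import Optional

# Illustrative threshold: the embodiment described later uses 0.2 m above the road.
ACTIVE_HEIGHT_THRESHOLD_M = 0.2

def classify_height(height_m: Optional[float]) -> str:
    """Initial classification of a single data point.

    height_m is the height above the road computed from the measured range,
    or None where the LIDAR return was not detected (a "blank" reading).
    """
    if height_m is None:
        return "blank"          # no return signal detected
    if height_m > ACTIVE_HEIGHT_THRESHOLD_M:
        return "active"         # consistent with a vehicle on the road
    return "empty_road"         # consistent with the road surface
```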
The first moving average may be, for example, a simple moving average, a weighted moving average (with more recent data being given greater weight), an adaptive moving average (where weights are adapted based on volatility), or a Hull moving average (which is aimed at reducing lag and improving the smoothness of the average). However, in a preferred embodiment an exponential moving average is used. Whichever moving average is used, it is typically taken over a value P associated with a data point at timestep N, as described in more detail below. In this example, the exponential moving average is calculated as:

EMA1_N = α·P_N + (1 - α)·EMA1_{N-1}

Typically a value of α may be around 5 × 10⁻⁶ (where a timestep is 5 ms), as this is found to provide a good balance between fast update speed and robustness to noise.
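A minimal sketch of this exponential moving average update, matching the formula above (the class and parameter names are illustrative):

```python
class ExponentialMovingAverage:
    """EMA_N = alpha * P_N + (1 - alpha) * EMA_{N-1}, per the formula above."""

    def __init__(self, alpha: float = 5e-6, initial: float = 0.0):
        self.alpha = alpha      # update rate; ~5e-6 for 5 ms timesteps per the text
        self.value = initial    # EMA value at the previous timestep

    def update(self, p: float) -> float:
        """Fold in the value P associated with the current data point."""
        self.value = self.alpha * p + (1.0 - self.alpha) * self.value
        return self.value
```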
In particular, the value of α should be selected to provide more than enough time for a vehicle to pass through the site, even in a very heavy traffic jam where vehicles may be stopped for a period. This will obviously depend on the nature of the road in which the particular embodiment is being installed, but around 15 minutes is often suitable (note that there are 180,000 5 ms timesteps in 15 minutes, and the inverse of the proposed α of 5 × 10⁻⁶ is 200,000, which is 16 minutes 40 seconds in 5 ms timesteps). The idea is to avoid a pixel being identified as an "inactive blank" simply due to a non-reflective vehicle being stopped in front of the sensor. However, it is also possible for a sensor to suddenly (but perhaps temporarily) become faulty, and in that case the pixel should be identified as an "inactive blank" within a reasonable period of time.
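The relationship between α, the timestep and this effective memory can be made concrete; a short illustrative sketch assuming the 5 ms timestep of the example:

```python
TIMESTEP_S = 0.005  # 5 ms per data point, as in the example above

def alpha_for_memory(memory_s: float) -> float:
    """Return alpha such that the EMA's time constant of 1/alpha timesteps
    spans roughly memory_s seconds of data."""
    return TIMESTEP_S / memory_s

# 1000 s (16 min 40 s) of memory recovers the alpha of 5e-6 quoted above.
assert abs(alpha_for_memory(1000.0) - 5e-6) < 1e-12
```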
For the first moving average, the value P associated with the data point may be one value for a blank data point, and a different value for a non-blank ("active" or "empty road") data point. For example the value of P may be 1 for blank data points, and 0 for non-blank data points. Accordingly, the moving average is closer to 1 when there have often been blank data points in the time series, and is closer to 0 when the data points have usually been non-blank.
In a simple case, the second moving average may be calculated in the same way, but where inverse values are given for the value associated with each data point. For example, the second moving average may be an exponential moving average calculated as:

EMA2_N = α·Q_N + (1 - α)·EMA2_{N-1}

where, for the second moving average, Q = 1 for a non-blank data point and Q = 0 for a blank data point.
Preferably, the value of α is the same for the first and second moving averages.
For each blank data point B at timestep N, the blank may be classified as an active blank or an inactive blank as follows: where MA1_N < β·MA2_N, classify as an "active blank"; otherwise, classify as an "inactive blank". (MA1_N designates the first moving average at timestep N, and MA2_N designates the second moving average at timestep N. In the examples above the first and second moving averages are exponential moving averages EMA1_N, EMA2_N, but in other embodiments they may be other types of moving averages.) β is a weighting factor which may be set at a suitable value in embodiments.
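Expressed as code, this comparison might look like the following sketch (names are illustrative; the value of β is an embodiment choice):

```python
def classify_blank(ma1_n: float, ma2_n: float, beta: float) -> str:
    """Resolve a blank data point at timestep N into active/inactive.

    ma1_n: first moving average (how often the series has been blank).
    ma2_n: second moving average (how often it has been non-blank).
    beta:  weighting factor, chosen per embodiment.
    """
    return "active_blank" if ma1_n < beta * ma2_n else "inactive_blank"
```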
In some embodiments, the second moving average may be calculated separately for the different types of non-blank data points ("active" or "empty road"). For example:

EMA_active_N = α·R_N + (1 - α)·EMA_active_{N-1}

where R = 1 for "active" data points and R = 0 for all other data points ("empty road" data points and "blank" data points), and:

EMA_empty_N = α·S_N + (1 - α)·EMA_empty_{N-1}

where S = 1 for "empty road" data points and S = 0 for all other data points ("active" data points and "blank" data points).
Again, in this example the moving averages EMA_active_N and EMA_empty_N are exponential moving averages, but other types of moving averages may be used.
In this embodiment the comparison may include two weighting factors:

EMA1_N < γ·EMA_active_N + δ·EMA_empty_N

It can be seen that where the same value is used for γ and δ this is equivalent to the simple case where a single second moving average EMA2_N is used in the comparison and "active" and "empty road" data points are not distinguished.
Note that, again, although exponential moving averages are used in the example, other types of moving averages may be used in embodiments, and accordingly the comparison may be expressed more generally as:

MA1_N < γ·MA_active_N + δ·MA_empty_N

where MA1_N, MA_active_N and MA_empty_N are respectively the first moving average, a moving average which is a measure of how often a data point has been classified as "active", and a moving average which is a measure of how often a data point has been classified as "empty road".
In a simple embodiment, each data point in the time series is a single measurement from a LIDAR sensor. It will be appreciated that other pre-processing may occur before the method of the invention is applied, in which case the data points in the time series may be the result of, for example, downsampling raw LIDAR sensor data. In any case, in the simple embodiment so far described, each data point is initially classified either as a "non-blank" data point or as a "blank" data point. Where "active" and "empty road" data points are distinguished, each data point is classified as one of "active", "empty road" or "blank".
In other embodiments, the method can be applied at the same time as a downsampling stage. The aim of the downsampling stage is to reduce a plurality of (say around 100) raw range measurements from the LIDAR sensor into a single representative range measurement associated with a "pixel", i.e. the resolution of the data is reduced by downsampling.
Where the method is carried out at the same time as a downsampling process, the classification of "active" or "inactive" blank pixels can take advantage of the higher resolution data which is available before downsampling. In other words, there may be available a plurality of raw point measurements associated with each data point.
One implication of this is that, of the raw point measurements associated with a particular data point, some of them may be "active", some of them may be "empty road" and some of them may be "blank". Therefore, instead of a data point having to be classified as one of "active", "empty road", or "blank", it is possible for example for a data point to be classified as "0.8 active, 0.15 empty road, 0.05 blank" (or, where "active" and "empty road" classifications are not distinguished at this stage, it would simply be 0.95 non-blank, 0.05 blank). In other words, multiple classifications are applied with weights given to the classifications associated with the proportion of raw point measurements associated with the data point having that classification.
The moving averages may then be calculated for each time series in a very similar way. For example the first moving average may be calculated as:

EMA1_N = α·P_N + (1 - α)·EMA1_{N-1}

But instead of P always being either 1 for a blank data point or 0 for a non-blank data point, P may be the weight associated with the "blank" classification, based on the number of raw point measurements associated with the data point which are blank.
For example, P = 0.05 where 5 of 100 raw point measurements associated with the data point are blank.
Likewise, the second moving average can be calculated as:

EMA2_N = α·Q_N + (1 - α)·EMA2_{N-1}

where Q is the weight associated with the non-blank classification, for example Q = 0.95 where 95 out of 100 raw point measurements associated with the data point are non-blank.
Where "active" and "empty road" data points are distinguished: EMA_activeN = aRN + (1 -a)EMA_activeN_i Where R is the weight associated with the "active" classification, for example R = 0.8 where 80 of 100 raw point measurements associated with the data point are active, and: EMA_emptyN = aSN + (1 -a)EMA_emptyN_i Where S is the weight associated with the "empty road" classification, for example S = 0.15 where 15 of 100 raw point measurements associated with the data point are classified as "empty road".
Preprocessing LIDAR sensor data by downsampling is common, and often takes place in hardware close to the sensors to avoid having to transmit at very high data rates to central processing. Classifying data points at the same time as downsampling adds little computational expense, and has the advantage of exploiting the higher-resolution data available before downsampling. The result of the combined downsampling and classification is a time series for each "pixel" (there will be far fewer pixels than raw point measurements, for example 100 raw point measurements per pixel). Each pixel in the time series will be classified as an "active blank" or an "inactive blank", or otherwise will contain a range associated with that pixel. Follow-on processing can then use this data to better determine, for example, whether one large vehicle or two smaller vehicles are being detected on a road.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the present invention, and to show more clearly how it may be carried into effect, reference will now be made, by way of example only, to the accompanying drawings, in which:
- Figure 1 shows a typical arrangement of a LIDAR sensor mounted on a gantry over a roadway, for detection and classification of vehicles;
- Figure 2 shows a typical time series of data points associated with a range in a particular direction as measured by the LIDAR sensor shown in Figure 1.
DESCRIPTION OF PREFERRED EMBODIMENTS
Referring firstly to Figure 1, a typical arrangement of a LIDAR sensor 10 mounted on a gantry 12 over a roadway 14 is illustrated. For clarity, only a single LIDAR sensor is shown but it will be understood that typical arrangements for example on a toll road have multiple LIDAR sensors across the gantry, to detect vehicles at any point across the full width of the road.
The sensor shown is arranged with three scan lines 16a, 16b, 16c set at about 12 degrees from each other, and positioned so that the scan lines 16a, 16b, 16c extend substantially across the width of a lane, at three positions along the length of the lane. This is a typical arrangement of a sensor.
Considering the data from a single scan line, typically a LIDAR sensor could scan across the line around 100 times per second. There could be, for example, several thousand points along the scan line, with a range measured at each point. Accordingly, the LIDAR sensor outputs several thousand time series of data points, each time series being associated with a point along the scan line, i.e. associated with a particular direction from the sensor in which a range is repeatedly measured.
The LIDAR sensor outputs may be spatially downsampled, to reduce the number of time series. In some embodiments the LIDAR sensor outputs may (instead or as well) be temporally downsampled, to reduce the number of data points associated with a particular length of real time.
Figure 2 shows a plot of three time series of data points. In this illustration each time series is plotted using a different colour. Each time series is associated with a similar lateral point along each of scan lines 16a, 16b, 16c. The time series is plotted with time along the horizontal axis, and measured height along the vertical axis. The measured height is calculated from the range measured by the LIDAR sensor in a known manner.
It can be seen in Figure 2 that an object is clearly being detected from around 815 to 820 seconds. This data may be analysed by processing software and identified as a vehicle. However there is then a gap of around 2-3 seconds during which blank measurements are received, followed by another object detected from about 823 to 826 seconds. This might be analysed by processing software and identified as a second vehicle, following about 2 seconds behind. However, in this case that analysis would be incorrect: in fact only one vehicle has driven past to cause the data output shown in Figure 2. The front cab of an articulated lorry is detected from about 815 to 820 seconds, but the top of the trailer has poor reflectivity and so there are no detections. The back of the trailer is detected from around 823 seconds.
It should be remembered that processing of this data typically happens in near real-time. So while it may be reasonably easy to deduce from Figure 2 as a whole that only one vehicle is present, it is much more difficult to determine at second 820 that a vehicle is still present. At that time, the sensor has not yet seen the data starting at about second 823, and may well trigger an action based on the vehicle having passed the sensor. According to the invention, although the sensor cannot measure the height of the vehicle at this time, it can label measurements as "active blank" and so downstream processing will be able to determine that a vehicle is still present. It will then "wait" to see the back of the vehicle, or until the blanks are classified as inactive, before triggering the action (e.g. taking a photograph of the rear numberplate).
In an embodiment of the invention, data points are processed as they are generated by the sensor (they may be processed directly as raw data points from the sensor, or the method of the invention may be applied after preprocessing, for example by downsampling).
To start with, a height above the road is calculated. This can be done using known techniques, for example:

height = (measured_range × [x, y, z coefficients] × [rotation matrix]).Z + sensor_height

where:
- [x, y, z coefficients] are per-beam coefficients, derived from each beam's angle from the sensor, which translate the measured range into x, y, z coordinates in the sensor's reference frame;
- [rotation matrix] rotates the 3D coordinates from the sensor's reference frame to a reference frame in which the Z axis is the vertical axis pointing upwards, i.e. perpendicular to the road surface;
- ( ).Z denotes taking the Z coordinate;
- sensor_height is the height of the sensor above the ground; it is added to translate the reference frame such that the Z axis datum is at road level.
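A minimal sketch of this height calculation (function and parameter names are illustrative assumptions; it assumes the beam coefficients form a unit direction vector in the sensor frame):

```python
import numpy as np

def height_above_road(
    measured_range: float,
    beam_direction: np.ndarray,  # unit [x, y, z] vector for this beam, sensor frame
    rotation: np.ndarray,        # 3x3 matrix: sensor frame -> frame with Z pointing up
    sensor_height: float,        # height of the sensor above the road, in metres
) -> float:
    """Scale the beam's direction coefficients by the measured range, rotate
    into a frame whose Z axis points up from the road, take the Z component,
    and add the sensor height so that the Z datum sits at road level."""
    point_in_sensor_frame = measured_range * beam_direction
    point_in_road_frame = rotation @ point_in_sensor_frame
    return float(point_in_road_frame[2]) + sensor_height
```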
A height H_N at timestep N is initially classified as either "blank", "active", or "empty road". The "active" classification is applied if the height is more than a threshold level; for example, anything above 0.2 m above the road may be classified as "active", and anything below that may be classified as "empty road". "Blank" heights are data points where no height could be calculated because no range could be measured, i.e. the LIDAR sensor did not detect the return signal.
At each timestep, after classifying the height at that timestep, a series of moving averages is updated:

EMA1_N = α·P_N + (1 - α)·EMA1_{N-1}
EMA_active_N = α·R_N + (1 - α)·EMA_active_{N-1}
EMA_empty_N = α·S_N + (1 - α)·EMA_empty_{N-1}

where the values of α, P, R and S are explained above.
Then, if the current data point H_N is classified as "blank", the classification is updated as follows: classify as "active blank" if:

EMA1_N < γ·EMA_active_N + δ·EMA_empty_N

and otherwise classify as an "inactive blank". The weighting factors γ and δ are explained above and in some embodiments may be the same value (in which case the "active" and "empty road" classifications do not need to be distinguished and can simply be combined as a single "non-blank" classification).
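Bringing the steps of this embodiment together, a minimal per-timestep sketch (illustrative only; it reuses the ExponentialMovingAverage sketch above, and the names and structure are assumptions rather than part of the invention):

```python
def process_timestep(
    classification: str,            # "blank", "active" or "empty_road"
    ema1: ExponentialMovingAverage,
    ema_active: ExponentialMovingAverage,
    ema_empty: ExponentialMovingAverage,
    gamma: float,
    delta: float,
) -> str:
    """Update the three moving averages for one time series, then resolve a
    "blank" data point into "active_blank" or "inactive_blank"."""
    ema1.update(1.0 if classification == "blank" else 0.0)
    ema_active.update(1.0 if classification == "active" else 0.0)
    ema_empty.update(1.0 if classification == "empty_road" else 0.0)

    if classification != "blank":
        return classification       # non-blank data points pass through unchanged
    threshold = gamma * ema_active.value + delta * ema_empty.value
    return "active_blank" if ema1.value < threshold else "inactive_blank"
```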
The output of the method is then a time series of heights or, where there is no height, an "active blank" or "inactive blank" classification at that data point.
Downstream processing uses the "active blank" labels, in particular, to determine when a vehicle is still present at the relevant position after a series of "active" measurements have been received. In particular, near real-time processing may trigger an action based on a vehicle having passed the sensor. For example, the action may be to direct a camera to photograph a vehicle's rear numberplate. Where "active blanks" are received, this may delay triggering of such an action when otherwise it would be triggered simply due to active measurements having stopped.
The embodiments described above are provided by way of example only, and various changes and modifications will be apparent to persons skilled in the art without departing from the scope of the present invention as defined by the appended claims.
Claims (17)
- CLAIMS
- 1. A method of processing data from a LIDAR sensor directed at a roadway, the data from the LIDAR sensor being a time series associated with each of a plurality of ranging directions, each data point in a time series representing a range from the sensor in the associated ranging direction at a particular time, the method comprising: for each time series: classifying each data point as: "blank", where the data point indicates that the return signal from the LIDAR sensor has not been detected; and/or "non-blank", where the data point indicates a range from the sensor, maintaining a first moving average associated with the time series wherein the first moving average is a measure of how often a data point in the time series has been classified as blank; maintaining at least a second moving average associated with the time series wherein the second moving average is a measure of how often a data point in the time series has been classified as non-blank; for each blank data point in the time series, making a comparison between the first and second moving averages at the time step associated with the blank data point, and classifying the blank data point as an "active blank" or an "inactive blank" depending on whether the first moving average exceeds a threshold condition relative to the second moving average.
- 2. A method as claimed in claim 1, in which the non-blank data points are further classified as: "empty road", where the data point indicates a range consistent with the LIDAR sensor detecting the road surface; and/or "active", where the data point indicates a range consistent with a vehicle present on the road.
- 3. A method of processing data as claimed in claim 2, in which the classification of non-blank data points into "active" or "empty road" classifications is based on comparing the range associated with the data point with a threshold.
- 4. A method of processing data as claimed in any of the preceding claims, in which the first moving average is an exponential moving average.
- 5. A method of processing data as claimed in claim 4, in which the first moving average is calculated as: EMA1_N = α·P_N + (1 - α)·EMA1_{N-1}, where P is a value associated with the data point at time step N.
- 6. A method of processing data as claimed in claim 5, in which P is given a first value where the data point is blank and a second value where the data point is non-blank.
- 7. A method of processing data as claimed in claim 6, in which P is 1 when the data point is blank and P is 0 when the data point is non-blank.
- 8. A method of processing data as claimed in any of the preceding claims, in which the second moving average is an exponential moving average.
- 9. A method of processing data as claimed in claim 8, in which the second moving average is calculated as: EMA2_N = α·Q_N + (1 - α)·EMA2_{N-1}, where Q is a value associated with the data point at time step N.
- 10. A method of processing data as claimed in claim 9, in which Q is 1 when the data point is non-blank and Q is 0 when the data point is blank.
- 11. A method of processing data as claimed in any of the preceding claims, in which each blank data point at time step N in the time series is classified as "active blank" where MA1_N < β·MA2_N, and "inactive blank" otherwise, where MA1_N is the first moving average updated to timestep N, MA2_N is the second moving average updated to timestep N, and β is a weighting factor.
- 12. A method of processing data as claimed in claim 2, in which the second moving average is composed of a moving average which is a measure of how often a data point in the time series has been classified as "empty road" and a moving average which is a measure of how often a data point in the time series has been classified as "active".
- 13. A method of processing data as claimed in claim 12, in which each blank data point at time step N in the time series is classified as "active blank" where: MA1_N < γ·MA_active_N + δ·MA_empty_N, and otherwise as "inactive blank", where γ and δ are weighting factors, MA_active_N is the moving average which is a measure of how often a data point in the time series has been classified as "active", updated to timestep N, and MA_empty_N is the moving average which is a measure of how often a data point in the time series has been classified as "empty road", updated to timestep N.
- 14. A method of processing data as claimed in claim 1, in which a plurality of raw point measurements are associated with each data point, and in which a data point is classified as both "blank" and "non-blank" with different weights according to the number of blank or non-blank raw point measurements associated with that data point.
- 15. A method of processing data as claimed in claim 2, in which a plurality of raw point measurements are associated with each data point, and in which a data point is classified as all of "blank", "empty road", and "active" with different weights according to the number of "blank", "empty road" and "active" raw point measurements associated with that data point.
- 16. A method of processing data as claimed in claim 14 or claim 15, in which the first moving average is calculated as: EMA1_N = α·P_N + (1 - α)·EMA1_{N-1}, where P is the weight associated with the "blank" classification.
- 17. A method of processing data as claimed in any of claims 14 to 16, in which the second moving average is calculated as: EMA2_N = α·Q_N + (1 - α)·EMA2_{N-1}, where Q is the weight associated with the "non-blank" classification.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2318079.7A (GB2635771A) | 2023-11-27 | 2023-11-27 | LIDAR data processing |
| PCT/GB2024/052973 (WO2025114697A1) | 2023-11-27 | 2024-11-27 | LIDAR data processing |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2318079.7A (GB2635771A) | 2023-11-27 | 2023-11-27 | LIDAR data processing |
Publications (3)
| Publication Number | Publication Date |
|---|---|
| GB202318079D0 | 2024-01-10 |
| GB2635771A | 2025-05-28 |
| GB2635771A8 | 2025-06-25 |
Family
ID=89429179
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB2318079.7A (GB2635771A, pending) | LIDAR data processing | 2023-11-27 | 2023-11-27 |
Country Status (2)
| Country | Link |
|---|---|
| GB (1) | GB2635771A (en) |
| WO (1) | WO2025114697A1 (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6624418B1 (en) * | 1997-02-08 | 2003-09-23 | Conti Temic Microelectronic Gmbh | Optical transmitting and receiving device |
| US20050200840A1 (en) * | 2004-03-09 | 2005-09-15 | Takekazu Terui | Object detecting apparatus |
| US20080210881A1 (en) * | 2005-07-29 | 2008-09-04 | Qinetiq Limited | Laser Measurement Device and Method |
| CN115390049A (en) * | 2022-08-01 | 2022-11-25 | 奥比中光科技集团股份有限公司 | Transmitting module dirt monitoring system, dirt detection method and related equipment |
| US20230213630A1 (en) * | 2020-06-09 | 2023-07-06 | Mercedes-Benz Group AG | Method and device for identifying contamination on a protective screen of a lidar sensor |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| PT2767964E (en) * | 2013-02-14 | 2015-02-10 | Kapsch Trafficcom Ag | Device for vehicle measurement |
| JP6580982B2 (en) * | 2015-12-25 | 2019-09-25 | 日立建機株式会社 | Off-road dump truck and obstacle discrimination device |
- 2023-11-27: application GB2318079.7A filed in GB; published as GB2635771A (status: pending)
- 2024-11-27: international application PCT/GB2024/052973 filed; published as WO2025114697A1 (status: pending)
Also Published As
| Publication number | Publication date |
|---|---|
| GB2635771A8 (en) | 2025-06-25 |
| WO2025114697A1 (en) | 2025-06-05 |
| GB202318079D0 (en) | 2024-01-10 |