US20180211394A1 - Method for identifying an object in a region surrounding a motor vehicle, driver assistance system and motor vehicle
- Publication number: US20180211394A1 (application US15/748,098)
- Authority
- US
- United States
- Prior art keywords
- captured
- image
- movement sequence
- motor vehicle
- basis
- Prior art date
- Legal status: Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/97—Determining parameters from multiple pictures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2552/00—Input parameters relating to infrastructure
- B60W2552/05—Type of road, e.g. motorways, local streets, paved or unpaved roads
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2555/00—Input parameters relating to exterior conditions, not covered by groups B60W2552/00, B60W2554/00
- B60W2555/60—Traffic rules, e.g. speed limits or right of way
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
Definitions
- the invention relates to a method for identifying an object in a region surrounding a motor vehicle as a stationary object, wherein in the method, the surrounding region is captured in images using a vehicle-side capture device and the object is detected in at least one of the captured images using an image processing device.
- the invention additionally relates to a driver assistance system and to a motor vehicle.
- Said information can be provided to a driver assistance system of the motor vehicle, for example an automated full-beam regulation system.
- the information extracted from the images can be used, for example, to regulate the full beam emitted by the motor vehicle so that, on the one hand, the region surrounding the motor vehicle, in particular a road of the motor vehicle, is illuminated as well as possible, and, on the other, so that other road users, such as oncoming vehicles, are not dazzled.
- the captured objects in the region surrounding the motor vehicle initially need to be classified or identified so that oncoming vehicles, for example, are recognized as such in the first place.
- a method according to the invention serves for identifying an object in a region surrounding a motor vehicle as a stationary object.
- the surrounding region is captured in images using a vehicle-side capture device and the object is recognized in at least one of the captured images using an image processing device.
- a first position of the object in the surrounding region relative to the motor vehicle is estimated on the basis of at least one first captured image
- a movement sequence of the object in image coordinates is determined on the basis of the first image and at least one second captured image
- a first movement sequence that characterizes a stationary object in the image coordinates is determined proceeding from the first estimated position in the surrounding region
- the captured object is identified as a stationary object on the basis of a comparison of the movement sequence of the captured object to the first characterizing movement sequence.
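The four steps above amount to comparing the observed image-space track of the detected object against the track a stationary object at the estimated position would produce. A minimal sketch, assuming a simple per-point Euclidean deviation threshold in pixels (the patent does not fix a particular distance measure, and the names used here are illustrative):

```python
import math

# Illustrative sketch only: the per-point Euclidean deviation and the
# pixel threshold are assumptions, not the patent's exact formulation.
def is_stationary(observed_track, characterizing_track, threshold_px):
    """Both tracks are lists of (a, b) image coordinates sampled at the
    same capture time points. The object is classified as stationary if
    the observed track never deviates from the characterizing track by
    more than the threshold."""
    deviation = max(math.dist(p, q)
                    for p, q in zip(observed_track, characterizing_track))
    return deviation <= threshold_px
```

For example, an observed track `[(100, 50), (104, 52)]` compared to a characterizing track `[(100, 50), (105, 52)]` deviates by at most one pixel, so it would be classified as stationary for any threshold of one pixel or more.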
- the method is consequently used to differentiate between whether the objects in the surrounding region are stationary objects or non-stationary objects. If an object is identified using the method as being non-stationary, it is assumed that the object is a dynamic object.
- the surrounding region, in particular the region located in front of, in particular laterally in front of, the motor vehicle in the driving direction, is captured in images using the vehicle-side capture device, which comprises, for example, at least one camera.
- the camera is in particular designed here to two-dimensionally capture the surrounding region.
- the image processing device is designed to recognize a two-dimensional projection of the object in the at least one captured image from image data of the at least one captured image and to thus recognize the object in the surrounding region.
- a first position of the object, i.e. of the real object, in the surrounding region relative to the motor vehicle is estimated.
- the first image can be the image in which the object was recognized by the image processing device.
- the first position of the real object in the surrounding region can be estimated on the basis of the two-dimensional projection of the object on the first image, for example on the basis of two-dimensional geometric measurements of the object on the image.
- the first position is here determined in particular in a world coordinate system and describes a first, possible distance of the captured object from the motor vehicle.
- the world coordinate system can be, for example, a vehicle coordinate system having a first axis along a vehicle lateral direction, a second axis along a vehicle longitudinal direction, and a third axis along a vehicle height direction.
- the distance of the object from the motor vehicle is thus determined in particular only on the basis of the captured images.
- the distance is in particular not directly measured. Consequently, the camera can have a particularly simple design and does not need to be an expensive time-of-flight camera, for example.
- a movement sequence of the object, that is to say of the projection of the object, in the image coordinates is determined.
- the image coordinates are here determined in a two-dimensional image coordinate system having a first, for example horizontal, image axis and a second, for example vertical, image axis.
- the movement sequence in the image coordinates is determined here such that, in the first image, an image position of the object, for example the image position of a projection point of the object, in the image coordinates is determined and, in the at least one second image, an image position of the object, for example the image position of the projection point of the object in the second image, in the image coordinates is determined.
- the change in the image positions of the object between the two images gives the movement sequence of the object.
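The movement sequence can thus be represented as the frame-to-frame change of the projection's image position; a small sketch (the function name is an assumption):

```python
def movement_sequence(image_positions):
    """Given the image positions (a, b) of the object's projection in
    successively captured images, return the change in image position
    between each pair of consecutive images."""
    return [(a2 - a1, b2 - b1)
            for (a1, b1), (a2, b2) in zip(image_positions, image_positions[1:])]
```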
- the image position change of the object, that is to say the movement of the projection of the object in the recorded images, is obtained by way of the motor vehicle travelling on a road. As the motor vehicle travels, the image positions of the object change, and thus the image coordinates of the object change between the at least two images.
- the movement sequence of the captured object in the image coordinates here corresponds to an instantaneous movement sequence, or an actual movement sequence, of the projection of the object in image coordinates.
- the time period between two successively recorded images can be, for example, between 50 ms and 80 ms, in particular 60 ms. As a result, an image position of the object can be determined every 60 ms, for example.
- the first movement sequence that characterizes a stationary object in the image coordinates is now determined.
- the first characterizing movement sequence here corresponds to a first predetermined movement sequence that the projection of the object has if the object is a stationary object at the first estimated position.
- image positions of a specified stationary reference object at the first estimated position can be determined for the at least two images.
- the movement sequence of the captured object and the first characterizing movement sequence can be determined as respectively one trajectory and be represented, for example, in one of the captured images.
- the instantaneous movement sequence and the first predetermined movement sequence can then be compared. To this end, for example a distance between the trajectories can be determined in the image. If, for example, the instantaneous movement sequence is congruent with the first predetermined movement sequence, or if the instantaneous movement sequence deviates from the first predetermined movement sequence by at most a specified threshold value, the captured object can be identified as a stationary object. However, if the instantaneous movement sequence deviates from the first predetermined movement sequence by more than the specified threshold value, the captured object is identified as a dynamic object.
- the method according to the invention can be used therefore to classify or identify in a particularly simple manner the object in the surrounding region from captured images of the surrounding region.
- a second position of the object in the surrounding region relative to the motor vehicle is estimated preferably on the basis of the at least one first captured image, and, proceeding from the second estimated position in the surrounding region, a second movement sequence that characterizes a stationary object in the image coordinates is determined, and the captured object is identified on the basis of the comparison of the movement sequence of the captured object to the second characterizing movement sequence.
- a second, possible distance of the object from the motor vehicle is estimated. The second distance is also determined in the world coordinate system.
- the invention is here based on the finding that the distance of the object cannot be accurately determined on the basis of the image that is two-dimensionally captured by the capture device; in particular, the distance can vary along a camera axis. Therefore, two possible, plausible distances of the object from the motor vehicle are estimated.
- the second characterizing movement sequence here corresponds to a second predetermined movement sequence which the object has if the object is a stationary object at the second estimated position.
- the second characterizing movement sequence can likewise be determined as a trajectory and be represented in the captured image.
- the instantaneous movement sequence is compared to the second predetermined movement sequence.
- the captured object can then be identified as being stationary if the instantaneous movement sequence is congruent with the second predetermined movement sequence or deviates from the second predetermined movement sequence at most by a further specified threshold value.
- the captured object is identified as a stationary object if the movement sequence of the captured object is within a corridor formed by the first and the second characterizing movement sequences.
- the first and the second movement sequence in the image coordinates form the corridor, that is to say a movement region within which the captured object moves, if it is a stationary object.
- a first height that is characteristic of the stationary object is specified for the captured object and the first position in the surrounding region relative to the motor vehicle is determined on the basis of the at least one first image and on the basis of the first height specified for the object
- a second height that is characteristic of the stationary object is specified for the captured object and the second position in the surrounding region relative to the motor vehicle is determined on the basis of the at least one first image and on the basis of the second height specified for the object.
- the first height here corresponds to a maximum height that the predetermined stationary reference object can have, for example, and the second height corresponds to a minimum height that the predetermined stationary reference object can have.
- the minimum and the maximum heights can be stored, for example, for the image processing device, with the result that the positions of the object in world coordinates can be determined simply and quickly on the basis of the at least one captured first image using the stored heights. Consequently, the distances of the object from the motor vehicle can be plausibly and quickly determined, in particular without the need to directly measure the distance. It is thus possible to dispense with a separate sensor device for distance measurement and/or the configuration of the cameras as a time-of-flight camera.
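Under a pinhole camera model, which is one plausible reading of the text but is not spelled out in it, the stored minimum and maximum reference heights bound the object's distance along the camera axis:

```python
def plausible_distances(pixel_height, focal_px, h_min_m, h_max_m):
    """An object of real height H metres that spans pixel_height pixels
    lies, under a pinhole model with focal length focal_px (in pixels),
    at distance d = focal_px * H / pixel_height. The stored minimum and
    maximum reference heights therefore yield the two plausible
    distances of the object from the camera."""
    d_near = focal_px * h_min_m / pixel_height
    d_far = focal_px * h_max_m / pixel_height
    return d_near, d_far
```

A delineator post spanning 50 pixels, with an assumed height between 0.5 m and 1.0 m and a focal length of 1000 pixels, would lie between 10 m and 20 m from the camera.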
- a vehicle speed and/or an angular rate of the motor vehicle may be captured to determine a characterizing movement sequence.
- using the vehicle speed and/or the angular rate, it is thus possible, in the case of a stationary object, to determine the position of the object in the world coordinate system, proceeding from the first estimated position and the second estimated position, for each time point at which an image is captured, that is to say the position of the reference object in the surrounding region relative to the motor vehicle, and to convert it into the image positions in the corresponding image coordinates of the image captured at that time point.
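A sketch of this propagation under simplified planar kinematics; the patent does not specify a vehicle model, so constant speed and yaw rate over the frame interval, as well as the pinhole projection and all names, are assumptions:

```python
import math

def propagate_stationary(x, y, speed, yaw_rate, dt):
    """Predict where a stationary point at (x, y) in the vehicle frame
    (x lateral, y longitudinal) lies after the vehicle has driven for dt
    seconds: the point moves backwards by the driven distance and is
    rotated into the newly yawed vehicle frame."""
    dpsi = yaw_rate * dt
    y_shifted = y - speed * dt
    x_new = x * math.cos(dpsi) + y_shifted * math.sin(dpsi)
    y_new = -x * math.sin(dpsi) + y_shifted * math.cos(dpsi)
    return x_new, y_new

def horizontal_image_coordinate(x, y, focal_px, principal_a):
    """Project the point into the horizontal image coordinate a under a
    simple pinhole model (focal length and principal point assumed)."""
    return principal_a + focal_px * x / y
```

Applying these two steps at each capture time point, starting from each of the two estimated positions, yields the two characterizing movement sequences in image coordinates.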
- the characterizing movement sequences are determined under the assumption that the motor vehicle travels on an even road.
- An even road is here a road that has neither bumps nor potholes. That means it is assumed that the motor vehicle performs no pitch movement, i.e. no rotational movement about the vehicle transverse axis, while travelling. In other words, a pitch angle of the motor vehicle does not change during travel.
- the characterizing movement sequences can be determined without the need to go to the effort of capturing the pitch angle of the motor vehicle.
- only image coordinates of a predetermined direction are compared during the comparison of the movement sequence of the captured object to the characterizing movement sequence.
- Preferably, only image coordinates in the horizontal direction are compared to one another.
- the invention is here based on the finding that, even if the captured object is a stationary object, the vertical image coordinates of the captured object would differ significantly from the vertical image coordinates of the characterizing movement sequences if the motor vehicle travels on an uneven road having bumps. Due to the uneven road, the motor vehicle performs a pitch movement during travel, as a result of which the captured object appears to move in the vertical direction in the image.
- This apparent vertical movement of the captured object in the image coordinates can result in the movement sequence of the captured object, determined on the basis of the image positions of the projection of the object, being outside the corridor formed by the two characterizing movement sequences.
- the characterizing movement sequences are specifically determined in particular under the assumption that the motor vehicle moves along an even road having no bumps. This means that the apparent vertical movement of the stationary object is not reflected in the characterizing movement sequences. For this reason, during the examination as to whether the movement sequence of the captured object is within the corridor, only the image coordinates in the horizontal direction are taken into consideration. The image coordinates in the vertical direction are not taken into consideration.
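The corridor test restricted to horizontal image coordinates can be sketched as follows (names assumed; the two characterizing tracks are the ones derived from the two estimated positions):

```python
def within_corridor(observed_track, track_near, track_far):
    """Check only the horizontal image coordinate a of each sample: the
    object counts as stationary when a always lies between the two
    characterizing tracks, so that pitch-induced apparent vertical
    motion cannot break the classification."""
    for (a, _b), (a1, _), (a2, _) in zip(observed_track, track_near, track_far):
        low, high = min(a1, a2), max(a1, a2)
        if not (low <= a <= high):
            return False
    return True
```

Note that the vertical coordinate b of each observed sample is read but never compared, exactly because road bumps make it unreliable.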
- an object can be reliably identified as a stationary object in particular without the need to separately capture the pitch angle even in the case of a road on which the motor vehicle performs a pitch movement, for example due to potholes.
- the position of the captured object is estimated on the basis of a first image captured at a current time point, and the movement sequence of the captured object is determined on the basis of at least one second image captured at a time point before the current time point. That means that the current position of the object in the world coordinates is estimated and, proceeding from the current position, the already performed movement of the captured object is determined.
- the movement sequence of the captured object is determined retrospectively, i.e. on the basis of the recorded history of image coordinates.
- the characterizing movement sequence is also hereby retrospectively determined, i.e. a movement that the captured object would have performed if it were a stationary object.
- the object can be classified particularly quickly as a stationary object or as a dynamic object.
- the object is preferably identified as a predetermined stationary, light-reflecting object, in particular as a delineator post or a road sign.
- a light-reflecting object is here understood to mean an object that has an active element reflecting light.
- Such an element can be, for example, a reflector which reflects incident light, like a cat's-eye.
- Such objects can be, for example, delineator posts or reflector posts, which are generally located at a side of the road of the motor vehicle and facilitate orientation for the driver of the motor vehicle in particular in the dark or at night or when visibility is poor.
- Such objects can also be road signs that actively reflect light for improved visibility.
- the object can be identified as a stationary light-reflecting object, in particular as a delineator post and/or as a road sign, if it has been detected, on the basis of the at least one captured first image and/or second image, that the captured object emits light in a directional or defined manner in the direction of the motor vehicle and it has been detected, on the basis of the comparison of the movement sequence of the captured light-emitting object to a movement sequence characteristic of a stationary object, that the captured object is stationary.
- it is determined, for example on the basis of the first detected image, whether the object emits light in a directional manner in the direction of the motor vehicle. It is also possible to estimate the position of the object on the basis of this image.
- the movement sequence of the captured light-emitting object can be determined and the object can be identified as being stationary on the basis of the comparison to the characterizing movement sequence.
- regulation of a light emitted by a headlight of the motor vehicle for illuminating a road of the motor vehicle is blocked if the object was identified as a stationary object which actively reflects light.
- no automatic full-beam regulation, for example dipping of the light, is performed if it has been found that the captured object is not an oncoming vehicle but, for example, a delineator post or a road sign. It is thus possible to prevent the automatic full-beam regulation from being performed unnecessarily.
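The resulting gating logic might look like this sketch; the two boolean inputs correspond to the directed-light detection and the stationarity classification described above, and the return labels are illustrative assumptions:

```python
def regulate_full_beam(emits_directed_light, is_stationary):
    """A directed light source that is also classified as stationary is
    taken to be a reflector (delineator post, road sign): dipping is
    blocked. A directed light source that is not stationary is presumed
    to be an oncoming vehicle: the light is dipped."""
    if emits_directed_light and not is_stationary:
        return "dip"
    return "keep_full_beam"
```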
- the invention additionally relates to a driver assistance system, in particular for full-beam regulation, for identifying an object in a region surrounding a motor vehicle as a stationary object.
- the driver assistance system comprises a vehicle-side capture device, for example a camera, for capturing the surrounding region in images and an image processing device for recognizing the object in at least one of the captured images.
- the image processing device is configured to estimate a first position of the object in the surrounding region relative to the motor vehicle on the basis of at least one first captured image, to determine a movement sequence of the object in image coordinates on the basis of the first image and at least one second captured image, to determine a first movement sequence that characterizes a stationary object in the image coordinates proceeding from the first estimated position in the surrounding region, and to identify the captured object as a stationary object on the basis of a comparison of the movement sequence of the captured object to the first characterizing movement sequence.
- a motor vehicle according to the invention comprises a driver assistance system according to the invention.
- the motor vehicle is configured in particular as a passenger vehicle.
- the driver assistance system can here control, for example, headlights of the motor vehicle to regulate the light emitted by the headlights.
- the terms “top”, “bottom”, “front”, “rear”, “horizontal” (a-direction), “vertical” (b-direction) etc. indicate positions and orientations based on appropriate observation of the captured images.
- the terms “internal”, “external”, “lateral”, “right”, “left”, “top”, “bottom”, “vehicle height axis” (z-direction), “vehicle longitudinal axis” (y-direction), “vehicle transverse axis” (x-direction) etc. indicate positions and orientations based on an observer standing in front of the vehicle and looking in the direction of the vehicle longitudinal axis.
- FIG. 1 shows a schematic illustration of an embodiment of a motor vehicle according to the invention
- FIGS. 2a, 2b, 2c show schematic illustrations of images which were recorded by a vehicle-side capture device
- FIG. 3 shows a schematic illustration of an embodiment of determining the position of a captured object.
- FIG. 1 shows a motor vehicle 1 having a driver assistance system 2 .
- the driver assistance system 2 comprises a vehicle-side capture device 3 , which can comprise at least one camera.
- the capture device 3 serves for capturing a surrounding region 4 of the motor vehicle 1 in two-dimensional images 12, 14 (see FIGS. 2a, 2b, 2c).
- the capture device captures in particular images 12 , 14 of a surrounding region 4 located—in the direction of travel (y-direction)—in front of and laterally in front of the motor vehicle 1 as the motor vehicle 1 travels on a road 9 .
- the driver assistance system 2 additionally comprises an image processing device 5 , which is configured to process the images 12 , 14 of the surrounding region 4 captured by the capture device 3 and to extract for example information from the captured images 12 , 14 .
- the driver assistance system 2 which serves in particular for full-beam regulation, can control, based on the information extracted from the images 12 , 14 , headlights 11 of the motor vehicle 1 in order to regulate or influence the light emitted by the headlights 11 .
- the driver assistance system 2 can have a speed sensor 6 for capturing a speed of the motor vehicle 1 travelling on the road 9 and an angular rate sensor 7 for capturing an angular rate of the motor vehicle 1 about a vehicle height axis (z-direction) of the motor vehicle 1 .
- the image processing device 5 is configured for recognizing an object 8 in at least one of the images 12 , 14 captured by the capture device 3 .
- the object 8 has, in the surrounding region 4 , a position P(x, y, z) relative to the motor vehicle 1 .
- the position P is here a position in a world coordinate system x, y, z, which is here defined as a vehicle coordinate system.
- the x-axis of the world coordinate system here runs along a vehicle lateral direction
- the y-axis runs along a vehicle longitudinal direction
- the z-axis runs along a vehicle height direction.
- the image processing device 5 is configured to classify the object 8 , that is to say to identify whether the object 8 is a stationary object that is positionally fixed in the surrounding region 4 , or a dynamic object. If the captured object 8 is a dynamic object, for example in the form of an oncoming vehicle, a control device 10 of the driver assistance system 2 can, for example, control the headlights 11 of the motor vehicle 1 to dip the light emitted by the headlights 11 and thus prevent dazzling a driver of the oncoming vehicle. Regulating the light emitted by the headlights 11 in this way should, however, be blocked if the image processing device 5 has detected that the object 8 is a stationary object, for example in the form of a delineator post 13 or a road sign.
- FIG. 2 a shows an image 12 of the surrounding region 4 recorded at a first time point by the capture device 3 .
- the first image 12 shows the surrounding region 4 ′ as a two-dimensional projection of the surrounding region 4 , the road 9 ′ as a two-dimensional projection of the road 9 , and the object 8 ′ as a two-dimensional projection of the object 8 at the first, in particular current, time point.
- An image position P′ of the object 8 ′ is indicated using image coordinates a, b of a two-dimensional image coordinate system.
- the object 8′ has in the image 12 a first image position P′(a1, b1). Based on the two-dimensional first image 12, a first position P1(x1, y1, z1) of the real object 8 (see FIG. 3) relative to the motor vehicle 1, that is to say a first, possible distance of the object 8 from the motor vehicle 1, and a second position P2(x2, y2, z2) of the real object 8 relative to the motor vehicle 1, that is to say a second, possible distance of the object 8 from the motor vehicle 1, are estimated.
- the positions P1, P2 are here determined in the world coordinate system (x, y, z).
- a first, maximum height h1 for the object 8 is specified or assumed, as shown in FIG. 3.
- a second, minimum height h2 for the object 8 is assumed.
- the possible positions P1, P2 of the real object 8 can then be determined on the basis of the image 12 and the heights h1, h2.
- the actual position P(x, y, z) of the object 8 is here located along a camera axis 20 between the two positions P1, P2.
- the heights h1, h2 can here be, for example, a typical specified maximum height and a typical specified minimum height of a delineator post 13.
- a movement sequence 15 (see FIG. 2c) of the object 8′, 8″, that is to say a movement sequence 15 of the projection of the captured object 8, in the image coordinates a, b is determined.
- an image position P′ of the object 8′ in the first image 12 and an image position P″(a2, b2) of the object 8″ in a second image 14, recorded at a second time point, can be determined.
- the second image 14 shows the surrounding region 4 ′′ as a two-dimensional projection of the surrounding region 4 , the road 9 ′′ as a two-dimensional projection of the road 9 , and the object 8 ′′ as a two-dimensional projection of the object 8 at the second time point.
- the second time point here occurs in particular before the first, current time point.
- the change in the image positions P′, P′′ can be determined as a trajectory characterizing the movement sequence 15 and be represented, for example, in the first image 12 (see FIG. 2 c ).
- movement sequences 16, 17, which are characteristic of a stationary object, are determined in image coordinates a, b.
- the first characterizing movement sequence 16 is here characteristic of a stationary object at the first, estimated position P 1
- the second characteristic movement sequence 17 is characteristic of a stationary object at the second, estimated position P 2 .
- the characterizing movement sequences 16 , 17 are determined for example by determining the image positions of a stationary reference object in the images 12 , 14 in image coordinates a, b. To this end, for the image positions of the stationary reference object in the first image 12 , the first position P 1 and the second position P 2 are converted into image coordinates a, b.
- a further, first position of the stationary reference object in world coordinates x, y, z and a further, second position of the stationary reference object in world coordinates x, y, z, which the reference object has at the second time point, are converted into image coordinates a, b.
- the further, first and the further, second positions can be determined, for example, on the basis of the speed and/or the angular rate of the motor vehicle 1 , which is captured using the speed sensor 6 and/or using the angular rate sensor 7 .
- the characterizing movement sequences 16 , 17 are determined under the assumption that the road 9 , on which the motor vehicle 1 travels, is even and has no potholes or bumps, for example.
- the movement sequences 16 , 17 which can likewise be represented as trajectories in the first image 12 (see FIG. 2 c ), form a corridor 18 .
- the captured object 8 is then identified as a stationary object if the movement sequence 15 of the captured object 8 ′, 8 ′′ falls within the corridor 18 .
- FIG. 2 c shows that the horizontal coordinates a of the movement sequence 15 are located within the corridor 18 , but not the vertical coordinates b of the movement sequence 15 .
- the movement sequence 15 here shows the movement sequence of the object 8′, 8″ as the motor vehicle 1 travels on a road 9 which has bumps. Due to these bumps, the motor vehicle 1 performs a pitch movement, that is to say a rotational movement about the vehicle transverse axis (x-axis). Due to said pitch movement, the object 8′, 8″ appears to move in the height direction (b-direction) in the image coordinates a, b.
- the captured object 8 can be identified as a stationary object.
- the delineator post 13 it is additionally possible to examine whether the object 8 actively reflects light in the direction of the motor vehicle 1 .
- the delineator post 13 has reflectors 19 .
- the image processing device 5 can, for example on the basis of the first image 12 , detect that the object 8 ′, 8 ′′ emits light in the direction of the motor vehicle 1 .
- the image processing device 5 can capture the object 8 as a stationary object on the basis of the movement sequences 15 , 16 , 17 and thus identify the object 8 correctly as the delineator post 13 .
Description
- The invention relates to a method for identifying an object in a region surrounding a motor vehicle as a stationary object, wherein in the method, the surrounding region is captured in images using a vehicle-side capture device and the object is detected in at least one of the captured images using an image processing device. The invention additionally relates to a driver assistance system and to a motor vehicle.
- It is already known from the prior art to mount capture devices, such as cameras, on motor vehicles to thereby capture a region surrounding the motor vehicle in images. Information can be extracted from said images, for example by recognizing objects in the images using an image processing device. DE 10 2008 063 328 A1, for example, proposes to determine a change in a pitch angle of a motor vehicle on the basis of captured objects.
- Said information can be provided to a driver assistance system of the motor vehicle, for example an automated full-beam regulation system. The information extracted from the images can be used, for example, to regulate the full beam emitted by the motor vehicle so that, on the one hand, the region surrounding the motor vehicle, in particular a road of the motor vehicle, is illuminated as well as possible, and, on the other, so that other road users, such as oncoming vehicles, are not dazzled. To this end, the captured objects in the region surrounding the motor vehicle initially need to be classified or identified so that oncoming vehicles, for example, are recognized as such in the first place.
- It is the object of the present invention to be able to reliably and simply identify objects in a region surrounding a motor vehicle.
- This object is achieved according to the invention by way of a method, a driver assistance system and a motor vehicle having the features in accordance with the independent patent claims.
- A method according to the invention serves for identifying an object in a region surrounding a motor vehicle as a stationary object. In the course of the method, the surrounding region is captured in images using a vehicle-side capture device and the object is recognized in at least one of the captured images using an image processing device. Moreover, a first position of the object in the surrounding region relative to the motor vehicle is estimated on the basis of at least one first captured image, a movement sequence of the object in image coordinates is determined on the basis of the first image and at least one second captured image, a first movement sequence that characterizes a stationary object in the image coordinates is determined proceeding from the first estimated position in the surrounding region, and the captured object is identified as a stationary object on the basis of a comparison of the movement sequence of the captured object to the first characterizing movement sequence.
- The method is consequently used to differentiate between whether the objects in the surrounding region are stationary objects or non-stationary objects. If an object is identified using the method as being non-stationary, it is assumed that the object is a dynamic object. In the method, the surrounding region, in particular the surrounding region located in front of, in particular laterally in front of, the motor vehicle in the driving direction is captured in images using the vehicle-side capture device which comprises, for example, at least one camera. The camera is in particular designed here to two-dimensionally capture the surrounding region. The image processing device is designed to recognize a two-dimensional projection of the object in the at least one captured image from image data of the at least one captured image and to thus recognize the object in the surrounding region.
- According to the invention, provision is now made for a first position of the object, i.e. of the real object, in the surrounding region relative to the motor vehicle to be estimated on the basis of the first captured image. The first image can be the image in which the object was recognized by the image processing device. The first position of the real object in the surrounding region can be estimated on the basis of the two-dimensional projection of the object on the first image, for example on the basis of two-dimensional geometric measurements of the object on the image. The first position is here determined in particular in a world coordinate system and describes a first, possible distance of the captured object from the motor vehicle. The world coordinate system can be, for example, a vehicle coordinate system having a first axis along a vehicle lateral direction, a second axis along a vehicle longitudinal direction, and a third axis along a vehicle height direction. The distance of the object from the motor vehicle is thus determined in particular only on the basis of the captured images. The distance is in particular not directly measured. Consequently, the camera can have a particularly simple design and does not need to be an expensive time-of-flight camera, for example.
- On the basis of the first image and at least a second image, a movement sequence of the object, that is to say the projection of the object, in the image coordinates is determined. The image coordinates are here determined in a two-dimensional image coordinate system having a first, for example horizontal, image axis and a second, for example vertical, image axis. The movement sequence in the image coordinates is determined here such that, in the first image, an image position of the object, for example the image position of a projection point of the object, in the image coordinates is determined and, in the at least one second image, an image position of the object, for example the image position of the projection point of the object in the second image, in the image coordinates is determined. The change in the image positions of the object between the two images here gives the movement sequence of the object. The image position change of the object, that is to say the movement of the projection of the object in the recorded images, is obtained by way of the motor vehicle travelling on a road. Due to the motor vehicle travelling, the image positions of the object in the image change and thus the image coordinates of the object between the at least two images change. The movement sequence of the captured object in the image coordinates here corresponds to an instantaneous movement sequence, or an actual movement sequence, of the projection of the object in image coordinates. The time period between two successively recorded images can be, for example, between 50 ms and 80 ms, in particular 60 ms. As a result, an image position of the object can be determined every 60 ms, for example.
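The movement sequence described above, i.e. the series of image positions of the object's projection across successive frames, can be sketched as follows (a hypothetical minimal example; the coordinate values and frame interval are illustrative and not taken from the patent):

```python
# Sketch: a movement sequence as the change in image coordinates (a, b) of the
# object's projection between successively recorded images, e.g. one image
# position every 60 ms.

def movement_sequence(image_positions):
    """Return the per-frame displacement vectors between successive
    image positions, i.e. the change in image coordinates (a, b)."""
    return [
        (a2 - a1, b2 - b1)
        for (a1, b1), (a2, b2) in zip(image_positions, image_positions[1:])
    ]

# Example: the projection drifts left and slightly down over three frames.
positions = [(320.0, 200.0), (310.0, 202.0), (298.0, 205.0)]
print(movement_sequence(positions))  # [(-10.0, 2.0), (-12.0, 3.0)]
```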
- In addition, the first movement sequence that characterizes a stationary object in the image coordinates is now determined. The first characterizing movement sequence here corresponds to a first predetermined movement sequence that the projection of the object has if the object is a stationary object at the first estimated position. To determine the first characterizing movement sequence, for example image positions of a specified stationary reference object at the first estimated position can be determined for the at least two images. The movement sequence of the captured object and the first characterizing movement sequence can be determined as respectively one trajectory and be represented, for example, in one of the captured images.
- To identify the object as a stationary object, the instantaneous movement sequence and the first predetermined movement sequence can then be compared. To this end, for example a distance between the trajectories can be determined in the image. If, for example, the instantaneous movement sequence is congruent with the first predetermined movement sequence, or if the instantaneous movement sequence deviates from the first predetermined movement sequence by at most a specified threshold value, the captured object can be identified as a stationary object. However, if the instantaneous movement sequence deviates from the first predetermined movement sequence by more than the specified threshold value, the captured object is identified as a dynamic object.
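The comparison step can be sketched as a pointwise distance test between the instantaneous and the predetermined trajectory; the threshold value and the coordinates below are illustrative assumptions, not values from the patent:

```python
import math

def deviates_at_most(observed, predicted, threshold):
    """True if the observed (instantaneous) trajectory deviates from the
    predicted (characterizing) trajectory by at most `threshold` pixels at
    every corresponding sample point."""
    return all(
        math.hypot(ao - ap, bo - bp) <= threshold
        for (ao, bo), (ap, bp) in zip(observed, predicted)
    )

observed = [(100.0, 50.0), (104.0, 52.0)]
predicted = [(101.0, 50.0), (103.0, 53.0)]
print(deviates_at_most(observed, predicted, threshold=2.0))  # True -> stationary
```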
- The method according to the invention can be used therefore to classify or identify in a particularly simple manner the object in the surrounding region from captured images of the surrounding region.
- In addition, a second position of the object in the surrounding region relative to the motor vehicle is estimated preferably on the basis of the at least one first captured image, and, proceeding from the second estimated position in the surrounding region, a second movement sequence that characterizes a stationary object in the image coordinates is determined, and the captured object is identified on the basis of the comparison of the movement sequence of the captured object to the second characterizing movement sequence. In other words, in addition to the first, possible distance of the object from the motor vehicle, a second, possible distance of the object from the motor vehicle is estimated. The second distance is also determined in the world coordinate system.
- The invention is here based on the finding that the distance of the object cannot be accurately determined on the basis of the image that is two-dimensionally captured by the capture device; in particular, the position of the object can vary along a camera axis. Therefore, two possible, plausible distances of the object from the motor vehicle are estimated. Thereupon, it is possible, in addition to the first characterizing movement sequence in image coordinates, which is determined for the reference object at the first position in world coordinates, for the second characterizing movement sequence in image coordinates to be determined for the reference object at the second position in world coordinates. The second characterizing movement sequence here corresponds to a second predetermined movement sequence which the object has if the object is a stationary object at the second estimated position. The second characterizing movement sequence can likewise be determined as a trajectory and be represented in the captured image. In addition, the instantaneous movement sequence is compared to the second predetermined movement sequence. The captured object can then be identified as being stationary if the instantaneous movement sequence is congruent with the second predetermined movement sequence or deviates from the second predetermined movement sequence by at most a further specified threshold value. By determining the second predetermined movement sequence and comparing the instantaneous movement sequence to both predetermined movement sequences, the captured object can be particularly reliably classified and identified as a stationary object.
- With particular preference, the captured object is identified as a stationary object if the movement sequence of the captured object is within a corridor formed by the first and the second characterizing movement sequences. In other words, the first and the second movement sequence in the image coordinates form the corridor, that is to say a movement region within which the captured object moves, if it is a stationary object. Provision can also be made here for the corridor to be extended by a tolerance region, with the result that the captured object is identified as a stationary object even if the movement sequence of the captured object is outside the corridor, but within the tolerance region. By representing all movement sequences as trajectories in one of the images, it is possible in a particularly simple manner to determine whether the trajectory of the captured object falls within the corridor.
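A minimal sketch of the corridor membership test, here applied to horizontal image coordinates only (the embodiment described later restricts the comparison to the horizontal direction anyway); all names and values are illustrative:

```python
def within_corridor(observed_a, seq1_a, seq2_a, tolerance=0.0):
    """True if every horizontal image coordinate of the observed movement
    sequence lies between the two characterizing sequences (the corridor),
    optionally widened by a tolerance region."""
    for a, a1, a2 in zip(observed_a, seq1_a, seq2_a):
        lo, hi = min(a1, a2) - tolerance, max(a1, a2) + tolerance
        if not (lo <= a <= hi):
            return False
    return True

# Horizontal coordinates of the observed object and of the two
# characterizing sequences at three sample times (illustrative values).
print(within_corridor([100, 110, 122], [95, 104, 115], [105, 118, 130]))  # True
```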
- It has proven advantageous if a first height that is characteristic of the stationary object is specified for the captured object and the first position in the surrounding region relative to the motor vehicle is determined on the basis of the at least one first image and on the basis of the first height specified for the object, and a second height that is characteristic of the stationary object is specified for the captured object and the second position in the surrounding region relative to the motor vehicle is determined on the basis of the at least one first image and on the basis of the second height specified for the object. In order to plausibly estimate the positions of the object in world coordinates on the basis of the two-dimensional images, two different, plausible heights for the object are specified. The first height here corresponds to a maximum height that the predetermined stationary reference object can have, for example, and the second height corresponds to a minimum height that the predetermined stationary reference object can have. The minimum and the maximum heights can be stored, for example, for the image processing device, with the result that the positions of the object in world coordinates can be determined simply and quickly on the basis of the at least one captured first image using the stored heights. Consequently, the distances of the object from the motor vehicle can be plausibly and quickly determined, in particular without the need to directly measure the distance. It is thus possible to dispense with a separate sensor device for distance measurement and/or the configuration of the camera as a time-of-flight camera.
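One plausible way to obtain the two distances from the specified minimum and maximum heights is a pinhole-camera relation; the patent does not spell out the projection model, so the following is only a sketch under that assumption, with illustrative numbers:

```python
def plausible_distances(pixel_height, focal_length_px, h_min_m, h_max_m):
    """Estimate the two plausible distances (in metres) of the object by
    assuming it has the minimum and the maximum height of the reference
    object (e.g. a delineator post) under a pinhole camera model:
        distance = focal_length_px * real_height / pixel_height
    """
    d_near = focal_length_px * h_min_m / pixel_height
    d_far = focal_length_px * h_max_m / pixel_height
    return d_near, d_far

# A projection 40 px tall, 800 px focal length, post height assumed 0.8-1.2 m.
d1, d2 = plausible_distances(40.0, 800.0, 0.8, 1.2)
print(d1, d2)  # 16.0 24.0
```

The true position then lies somewhere on the viewing ray between these two hypotheses, matching the camera axis 20 of FIG. 3.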
- Provision may be made for a vehicle speed and/or an angular rate of the motor vehicle about a vehicle height axis to be captured to determine a characterizing movement sequence. On the basis of the vehicle speed and/or the angular rate, it is thus possible to determine the position of the object, proceeding from the first estimated position and the second estimated position for each time point at which an image is captured, in the case of a stationary object in the world coordinate system, that is to say for example the position of the reference object in the surrounding region relative to the motor vehicle, and to convert it into the image positions in the corresponding image coordinates of the image captured at that time point. Consequently, it is possible to determine for each image the image position of the reference object and thus a predetermined image position which the captured object in the image has if the captured object is stationary. The change in the predetermined image positions between two images gives the predetermined movement sequence, that is to say the movement sequence that characterizes a stationary object.
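Advancing the position of the assumed stationary reference object relative to the vehicle between two capture time points, using the captured speed and angular rate, can be sketched with simple planar dead reckoning (the subsequent projection into image coordinates is omitted; the model and values are illustrative, not from the patent):

```python
import math

def advance_relative_position(x, y, speed, yaw_rate, dt):
    """Position of a stationary world point relative to the vehicle after
    time dt, given the vehicle speed (m/s, along the y-axis) and the angular
    rate (rad/s, about the vehicle height axis). In the vehicle frame a
    stationary point appears to move backwards and to rotate opposite to
    the vehicle's yaw."""
    # Translation: the vehicle moves forward, so the point moves towards it.
    y_t = y - speed * dt
    # Rotation by the negative of the vehicle's yaw increment.
    dpsi = -yaw_rate * dt
    x_new = x * math.cos(dpsi) - y_t * math.sin(dpsi)
    y_new = x * math.sin(dpsi) + y_t * math.cos(dpsi)
    return x_new, y_new

# Point 20 m ahead, vehicle at 15 m/s, no yaw, 60 ms frame interval.
print(advance_relative_position(0.0, 20.0, 15.0, 0.0, 0.06))  # approx. (0.0, 19.1)
```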
- In addition, the characterizing movement sequences are determined under the assumption that the motor vehicle travels on an even road. An even road is here a road that has neither bumps nor potholes. That means that it is assumed that the motor vehicle performs no pitch movement, i.e. no rotational movement about the vehicle transverse axis, during travelling. In other words, a pitch angle of the motor vehicle does not change during travel. As a result, the characterizing movement sequences can be determined without the need to go to the effort of capturing the pitch angle of the motor vehicle.
- In accordance with an embodiment, only image coordinates of a predetermined direction are compared during the comparison of the movement sequence of the captured object to the characterizing movement sequence. Preferably, only image coordinates in the horizontal direction are compared to one another. The invention is here based on the finding that, even if the captured object is a stationary object, the vertical image coordinates of the captured object would differ significantly from the vertical image coordinates of the characterizing movement sequences if the motor vehicle travels on an uneven road having bumps. Due to the uneven road, the motor vehicle performs a pitch movement during travel, as a result of which the captured object appears to move in the vertical direction in the image. This apparent vertical movement of the captured object in the image coordinates can result in the movement sequence of the captured object, determined on the basis of the image positions of the projection of the object, being outside the corridor formed by the two characterizing movement sequences. The characterizing movement sequences are specifically determined in particular under the assumption that the motor vehicle moves along an even road having no bumps. This means that the apparent vertical movement of the stationary object is not reflected in the characterizing movement sequences. For this reason, during the examination as to whether the movement sequence of the captured object is within the corridor, only the image coordinates in the horizontal direction are taken into consideration. The image coordinates in the vertical direction are not taken into consideration. As a result, an object can be reliably identified as a stationary object in particular without the need to separately capture the pitch angle even in the case of a road on which the motor vehicle performs a pitch movement, for example due to potholes.
- In a development of the invention, the position of the captured object is estimated on the basis of a first image captured at a current time point, and the movement sequence of the captured object is determined on the basis of at least one second image captured at a time point before the current time point. That means that the current position of the object in the world coordinates is estimated and, proceeding from the current position, the already performed movement of the captured object is determined. In other words, the movement sequence of the captured object is determined retrospectively, i.e. on the basis of the recorded history of image coordinates. The characterizing movement sequence is also hereby retrospectively determined, i.e. a movement that the captured object would have performed if it were a stationary object. As a result, the object can be classified particularly quickly as a stationary object or as a dynamic object.
- Preferably, the object is identified as a stationary, light-reflecting object, in particular a delineator post or a road sign. A light-reflecting object is here understood to mean an object that has an active, light-reflecting element. Such an element can be, for example, a reflector, such as a cat's-eye. Such objects can be, for example, delineator posts or reflector posts, which are generally located at a side of the road of the motor vehicle and facilitate orientation for the driver of the motor vehicle, in particular in the dark, at night, or when visibility is poor. Such objects can also be road signs that actively reflect light for improved visibility. Due to the objects that actively reflect light, light that was emitted, for example, by a headlight of the motor vehicle to illuminate the road is reflected in a directional or defined manner by the object back to the motor vehicle and can therefore be erroneously detected by the image processing device, in particular at night, as an oncoming vehicle emitting light. Using the method in accordance with the invention, which can identify objects as being stationary or dynamic, it is advantageously possible to prevent such a mix-up.
- Here, the object can be identified as a stationary light-reflecting object, in particular as a delineator post and/or as a road sign, if it has been detected, on the basis of the at least one captured first image and/or second image, that the captured object emits light in a directional or defined manner in the direction of the motor vehicle and it has been detected, on the basis of the comparison of the movement sequence of the captured light-emitting object to a movement sequence characteristic of a stationary object, that the captured object is stationary. In other words, it is determined, for example on the basis of the first detected image, whether the object emits light in a directional manner in the direction of the motor vehicle. It is also possible to estimate the position of the object on the basis of this image. Subsequently, the movement sequence of the captured light-emitting object can be determined and the object can be identified as being stationary on the basis of the comparison to the characterizing movement sequence.
- Preferably, regulation of a light emitted by a headlight of the motor vehicle for illuminating a road of the motor vehicle is blocked if the object was identified as a stationary object which actively reflects light. In other words, no automatic full-beam regulation, for example dipping of the light, is performed if it has been found that the captured object is not an oncoming vehicle but for example a delineator post or a road sign. It is thus possible to prevent the automatic full-beam regulation from being performed unnecessarily.
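The resulting gating of the automatic full-beam regulation can be condensed into a small decision rule (the function and parameter names are hypothetical, not from the patent):

```python
def should_dip_full_beam(emits_directed_light, is_stationary):
    """Dip the full beam only for a detected light source that is NOT
    identified as a stationary, light-reflecting object (e.g. a delineator
    post or a road sign); otherwise the regulation is blocked."""
    return emits_directed_light and not is_stationary

print(should_dip_full_beam(True, False))  # True: likely an oncoming vehicle
print(should_dip_full_beam(True, True))   # False: stationary reflector, block
```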
- The invention additionally relates to a driver assistance system, in particular for full-beam regulation, for identifying an object in a region surrounding a motor vehicle as a stationary object. The driver assistance system comprises a vehicle-side capture device, for example a camera, for capturing the surrounding region in images and an image processing device for recognizing the object in at least one of the captured images. Moreover, the image processing device is configured to estimate a first position of the object in the surrounding region relative to the motor vehicle on the basis of at least one first captured image, to determine a movement sequence of the object in image coordinates on the basis of the first image and at least one second captured image, to determine a first movement sequence that characterizes a stationary object in the image coordinates proceeding from the first estimated position in the surrounding region, and to identify the captured object as a stationary object on the basis of a comparison of the movement sequence of the captured object to the first characterizing movement sequence.
- A motor vehicle according to the invention comprises a driver assistance system according to the invention. The motor vehicle is configured in particular as a passenger vehicle. The driver assistance system can here control, for example, headlights of the motor vehicle to regulate the light emitted by the headlights.
- The preferred embodiments introduced with respect to the method according to the invention, and the advantages thereof, apply accordingly to the driver assistance system according to the invention and to the motor vehicle according to the invention.
- The terms “top”, “bottom”, “front”, “rear”, “horizontal” (a-direction), “vertical” (b-direction) etc. indicate positions and orientations based on appropriate observation of the captured images. The terms “internal”, “external”, “lateral”, “right”, “left”, “top”, “bottom”, “vehicle height axis” (z-direction), “vehicle longitudinal axis” (y-direction), “vehicle transverse axis” (x-direction) etc. indicate positions and orientations based on an observer standing in front of the vehicle and looking in the direction of the vehicle longitudinal axis.
- Further features of the invention can be gathered from the claims, the figures and the description of the figures. The features and feature combinations mentioned previously in the description and the features and feature combinations yet to be mentioned in the description of the figures and/or shown alone in the figures can be used not only in the combinations which are indicated in each case, but also in other combinations or alone, without departing from the scope of the invention. Configurations of the invention that are not explicitly shown and explained in the figures but can be gathered and realized through separate feature combinations from the explained configurations are also thus to be considered as being included and disclosed. Configurations and feature combinations that do not have all the features of an independent claim of original wording are consequently also to be considered to be disclosed.
- The invention will be explained in more detail below on the basis of a preferred exemplary embodiment and with reference to the attached drawings.
- In the drawings:
- FIG. 1 shows a schematic illustration of an embodiment of a motor vehicle according to the invention;
- FIGS. 2a, 2b, 2c show schematic illustrations of images which were recorded by a vehicle-side capture device; and
- FIG. 3 shows a schematic illustration of an embodiment for determining the position of a captured object.
- Identical elements and elements having identical functions are provided in the figures with the same reference numerals.
-
FIG. 1 shows a motor vehicle 1 having adriver assistance system 2. Thedriver assistance system 2 comprises a vehicle-side capture device 3, which can comprise at least one camera. Thecapture device 3 serves for capturing asurrounding region 4 of the motor vehicle 1 in two-dimensional images 12, 14 (seeFIGS. 2a, 2b, 2c ). The capture device captures in 12, 14 of aparticular images surrounding region 4 located—in the direction of travel (y-direction)—in front of and laterally in front of the motor vehicle 1 as the motor vehicle 1 travels on aroad 9. - The
driver assistance system 2 additionally comprises animage processing device 5, which is configured to process the 12, 14 of theimages surrounding region 4 captured by thecapture device 3 and to extract for example information from the captured 12, 14. Theimages driver assistance system 2, which serves in particular for full-beam regulation, can control, based on the information extracted from the 12, 14,images headlights 11 of the motor vehicle 1 in order to regulate or influence the light emitted by theheadlights 11. In addition, thedriver assistance system 2 can have a speed sensor 6 for capturing a speed of the motor vehicle 1 travelling on theroad 9 and an angular rate sensor 7 for capturing an angular rate of the motor vehicle 1 about a vehicle height axis (z-direction) of the motor vehicle 1. - The
image processing device 5 is configured for recognizing anobject 8 in at least one of the 12, 14 captured by theimages capture device 3. Theobject 8 has, in thesurrounding region 4, a position P(x, y, z) relative to the motor vehicle 1. The position P is here a position in a world coordinate system x, y, z, which is here defined as a vehicle coordinate system. The x-axis of the world coordinate system here runs along a vehicle lateral direction, the y-axis runs along a vehicle longitudinal direction, and the z-axis runs along a vehicle height direction. - The
image processing device 5 is configured to classify theobject 8, that is to say to identify whether theobject 8 is a stationary object that is positionally fixed in thesurrounding region 4, or a dynamic object. If the capturedobject 8 is a dynamic object, for example in the form of an oncoming vehicle, acontrol device 10 of thedriver assistance system 2 can, for example, control theheadlights 11 of the motor vehicle 1 to dip the light emitted by theheadlights 11 and thus prevent dazzling a driver of the oncoming vehicle. Regulating the light emitted by theheadlights 11 in this way should, however, be blocked if theimage processing device 5 has detected that theobject 8 is a stationary object, for example in the form of adelineator post 13 or a road sign. - Identifying the
object 8 as a stationary object, in particular as adelineator post 13, will be explained with reference toFIG. 2a ,FIG. 2b , andFIG. 2c .FIG. 2a shows animage 12 of thesurrounding region 4 recorded at a first time point by thecapture device 3. Thefirst image 12 shows thesurrounding region 4′ as a two-dimensional projection of thesurrounding region 4, theroad 9′ as a two-dimensional projection of theroad 9, and theobject 8′ as a two-dimensional projection of theobject 8 at the first, in particular current, time point. An image position P′ of theobject 8′ is indicated using image coordinates a, b of a two-dimensional image coordinate system. Theobject 8′ has in the image 12 a first image position P′(a1, b1). Based on the two-dimensionalfirst image 12, a first position P1(x1, y1, z1) of the real object 8 (seeFIG. 3 ) relative to the motor vehicle 1, that is to say a first, possible distance of theobject 8 from the motor vehicle 1, and a second position P2(x2, y2, z2) of thereal object 8 relative to the motor vehicle 1, that is to say a second, possible distance of theobject 8 from the motor vehicle 1, are estimated. The positions P1, P2 are here determined in the world coordinate system (x, y, z). - To determine the first position P1, a first, maximum height h1 for the
object 8 is specified or assumed, as shown inFIG. 3 . To determine the second position P2, a second, minimum height h2 for theobject 8 is assumed. The possible positions P1, P2 of thereal object 8 can then be determined on the basis of theimage 12 and the heights h1, h2. The actual position P(x, y, z) of theobject 8 is here located along acamera axis 20 between the two positions P1, P2. The heights h1, h2 can here be, for example, a typical specified maximum height and a typical specified minimum height of adelineator post 13. - To identify the captured
object 8 as a stationary object, first a movement sequence 15 (seeFIG. 2c ) of theobject 8′, 8″, that is to say amovement sequence 15 of the projection of the capturedobject 8, in the image coordinates a, b is determined. To this end, for example an image position P′ of theobject 8′ in thefirst image 12 and an image position P″(a2, b2) of theobject 8″ in asecond image 14, recorded at a second time point, can be determined. Thesecond image 14 shows thesurrounding region 4″ as a two-dimensional projection of thesurrounding region 4, theroad 9″ as a two-dimensional projection of theroad 9, and theobject 8″ as a two-dimensional projection of theobject 8 at the second time point. The second time point here occurs in particular before the first, current time point. The change in the image positions P′, P″ can be determined as a trajectory characterizing themovement sequence 15 and be represented, for example, in the first image 12 (seeFIG. 2c ). - In addition, proceeding from the first estimated position P1 and the second estimated position P2,
16, 17, which are characteristic of a stationary object, in image coordinates a, b are determined. The firstmovement sequences characterizing movement sequence 16 is here characteristic of a stationary object at the first, estimated position P1, and the secondcharacteristic movement sequence 17 is characteristic of a stationary object at the second, estimated position P2. The 16, 17 are determined for example by determining the image positions of a stationary reference object in thecharacterizing movement sequences 12, 14 in image coordinates a, b. To this end, for the image positions of the stationary reference object in theimages first image 12, the first position P1 and the second position P2 are converted into image coordinates a, b. To determine the image positions of the stationary reference object in thesecond image 14, a further, first position of the stationary reference object in world coordinates x, y, z and a further, second position of the stationary reference object in world coordinates x, y, z, which the reference object has at the second time point, are converted into image coordinates a, b. The further, first and the further, second positions can be determined, for example, on the basis of the speed and/or the angular rate of the motor vehicle 1, which is captured using the speed sensor 6 and/or using the angular rate sensor 7. In addition, the 16, 17 are determined under the assumption that thecharacterizing movement sequences road 9, on which the motor vehicle 1 travels, is even and has no potholes or bumps, for example. The 16, 17, which can likewise be represented as trajectories in the first image 12 (seemovement sequences FIG. 2c ), form acorridor 18. The capturedobject 8 is then identified as a stationary object if themovement sequence 15 of the capturedobject 8′, 8″ falls within thecorridor 18. -
FIG. 2c shows that the horizontal coordinates a of the movement sequence 15 are located within the corridor 18, but not the vertical coordinates b of the movement sequence 15. The movement sequence 15 here shows the movement sequence of the object 8′, 8″ for the motor vehicle 1 as it travels on a road 9 which has bumps. Due to these bumps, the motor vehicle 1 performs a pitch movement about the vehicle transverse axis, that is to say a pitch movement along the z-axis. Due to said pitch movement, the object 8′, 8″ appears to move in the height direction (b-direction) in the image coordinates a, b. In order to be able to identify the captured object 8 as a stationary object despite the pitch movement and without having to capture the pitch movement, only the horizontal image coordinates a are taken into consideration during the examination as to whether the movement sequence 15 falls within the corridor 18. The vertical image coordinates b are not taken into consideration. - On the basis of the comparison of the
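A minimal sketch of this corridor test, which deliberately ignores the vertical coordinate b so that pitch-induced vertical motion in the image does not cause a stationary object to be misclassified (the function name and tolerance parameter are illustrative assumptions):

```python
def falls_within_corridor(seq15, corridor18, tol=0.0):
    """True if every horizontal coordinate a of the object's movement
    sequence lies inside the per-frame corridor bounds.  The vertical
    coordinate b is deliberately ignored, so pitch-induced vertical
    shifts of the object in the image do not break the test."""
    return all(
        lo - tol <= a <= hi + tol
        for (a, _b), (lo, hi) in zip(seq15, corridor18)
    )

# Horizontal coordinates inside the corridor; the vertical coordinate
# jumps because of a bump, but is not considered:
seq15 = [(12.0, 50.0), (18.0, 95.0)]
corridor18 = [(10.0, 15.0), (15.0, 20.0)]
print(falls_within_corridor(seq15, corridor18))  # True
```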
movement sequence 15 to the characterizing movement sequences 16, 17, the captured object 8 can be identified as a stationary object. In order to identify the object 8 as the delineator post 13, it is additionally possible to examine whether the object 8 actively reflects light in the direction of the motor vehicle 1. In order to actively reflect light, the delineator post 13 has reflectors 19. The image processing device 5 can, for example on the basis of the first image 12, detect that the object 8′, 8″ emits light in the direction of the motor vehicle 1. Finally, the image processing device 5 can capture the object 8 as a stationary object on the basis of the movement sequences 15, 16, 17 and thus identify the object 8 correctly as the delineator post 13. Once the object 8 has been identified as a delineator post 13, it is possible for the control device 10 to prevent the headlights 11 from unnecessarily being controlled to regulate the full beam.
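The final decision logic, combining the stationary-object test with the reflected-light check, might be sketched as follows; the function and its return labels are hypothetical and do not claim to be the patent's implementation:

```python
def classify(is_stationary, reflects_light_towards_vehicle):
    """Illustrative final decision: an object identified as stationary
    that also actively reflects light toward the vehicle is treated as
    a delineator post, so full-beam regulation can disregard it."""
    if is_stationary and reflects_light_towards_vehicle:
        return "delineator post"
    if is_stationary:
        return "stationary object"
    return "other object"

print(classify(True, True))  # delineator post
```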
Claims (14)
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE102015112289.8 | 2015-07-28 | ||
| DE102015112289 | 2015-07-28 | ||
| DE102015112289.8A DE102015112289A1 (en) | 2015-07-28 | 2015-07-28 | Method for identifying an object in a surrounding area of a motor vehicle, driver assistance system and motor vehicle |
| PCT/EP2016/067715 WO2017017077A1 (en) | 2015-07-28 | 2016-07-26 | Method for identifying an object in a surrounding region of a motor vehicle, driver assistance system and motor vehicle |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20180211394A1 true US20180211394A1 (en) | 2018-07-26 |
| US10685447B2 US10685447B2 (en) | 2020-06-16 |
Family
ID=56511587
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/748,098 Active 2037-03-17 US10685447B2 (en) | 2015-07-28 | 2016-07-26 | Method for identifying an object in a region surrounding a motor vehicle, driver assistance system and motor vehicle |
Country Status (7)
| Country | Link |
|---|---|
| US (1) | US10685447B2 (en) |
| EP (1) | EP3329462B1 (en) |
| JP (1) | JP6661748B2 (en) |
| KR (1) | KR102052840B1 (en) |
| CN (1) | CN108027971B (en) |
| DE (1) | DE102015112289A1 (en) |
| WO (1) | WO2017017077A1 (en) |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE102016118538A1 (en) | 2016-09-29 | 2018-03-29 | Valeo Schalter Und Sensoren Gmbh | Method for classifying a traffic sign in a surrounding area of a motor vehicle, computing device, driver assistance system and motor vehicle |
| KR102741044B1 (en) | 2019-03-29 | 2024-12-11 | 삼성전자주식회사 | Electronic apparatus and method for assisting driving of a vehicle |
| CN110223511A (en) * | 2019-04-29 | 2019-09-10 | 合刃科技(武汉)有限公司 | A kind of automobile roadside is separated to stop intelligent monitoring method and system |
| DE102019210300B4 (en) * | 2019-07-11 | 2024-11-28 | Volkswagen Aktiengesellschaft | Method and device for camera-based determination of a distance of a moving object in the environment of a motor vehicle |
| CN111353453B (en) * | 2020-03-06 | 2023-08-25 | 北京百度网讯科技有限公司 | Obstacle detection method and device for vehicle |
| DE102020111471A1 (en) | 2020-04-27 | 2021-10-28 | Bayerische Motoren Werke Aktiengesellschaft | Method and system for image recognition for an automated vehicle |
| KR20220026154A (en) * | 2020-08-25 | 2022-03-04 | 현대자동차주식회사 | Vehicle and controlling method of vehicle |
| EP4254382A4 (en) * | 2020-11-27 | 2023-10-04 | Nissan Motor Co., Ltd. | Vehicle assistance method and vehicle assistance device |
| CN120507118B (en) * | 2025-06-03 | 2025-12-09 | 河南永泰光电有限公司 | Performance evaluation method and system for vehicle-mounted optical lens |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140104408A1 (en) * | 2012-10-17 | 2014-04-17 | Denso Corporation | Vehicle driving assistance system using image information |
| US9064418B2 (en) * | 2010-03-17 | 2015-06-23 | Hitachi Automotive Systems, Ltd. | Vehicle-mounted environment recognition apparatus and vehicle-mounted environment recognition system |
| US20150310621A1 (en) * | 2012-10-29 | 2015-10-29 | Hitachi Automotive Systems, Ltd. | Image Processing Device |
| US20170270370A1 (en) * | 2014-09-29 | 2017-09-21 | Clarion Co., Ltd. | In-vehicle image processing device |
| US20180118100A1 (en) * | 2016-10-28 | 2018-05-03 | Volvo Car Corporation | Road vehicle turn signal assist system and method |
Family Cites Families (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4052650B2 (en) * | 2004-01-23 | 2008-02-27 | 株式会社東芝 | Obstacle detection device, method and program |
| JP4244887B2 (en) * | 2004-08-31 | 2009-03-25 | 日産自動車株式会社 | Moving position prediction apparatus, moving position prediction method, collision determination apparatus, and collision determination method |
| DE102005017422A1 (en) * | 2005-04-15 | 2006-10-19 | Robert Bosch Gmbh | Driver assistance system with device for detecting stationary objects |
| JP4743037B2 (en) * | 2006-07-28 | 2011-08-10 | 株式会社デンソー | Vehicle detection device |
| DE102007054048A1 (en) * | 2007-11-13 | 2009-05-14 | Daimler Ag | Method and device for a driving light control of a vehicle |
| DE102008025457A1 (en) * | 2008-05-28 | 2009-12-03 | Hella Kgaa Hueck & Co. | Method and device for controlling the light output of a vehicle |
| DE102008063328A1 (en) | 2008-12-30 | 2010-07-01 | Hella Kgaa Hueck & Co. | Method and device for determining a change in the pitch angle of a camera of a vehicle |
| JP5310235B2 (en) * | 2009-04-27 | 2013-10-09 | 株式会社豊田中央研究所 | On-vehicle lighting control device and program |
| JP5152144B2 (en) * | 2009-10-07 | 2013-02-27 | 株式会社デンソーアイティーラボラトリ | Image processing device |
| EP2418123B1 (en) * | 2010-08-11 | 2012-10-24 | Valeo Schalter und Sensoren GmbH | Method and system for supporting a driver of a vehicle in manoeuvring the vehicle on a driving route and portable communication device |
| DE102010034139A1 (en) * | 2010-08-12 | 2012-02-16 | Valeo Schalter Und Sensoren Gmbh | Method for supporting a parking process of a motor vehicle, driver assistance system and motor vehicle |
-
2015
- 2015-07-28 DE DE102015112289.8A patent/DE102015112289A1/en not_active Withdrawn
-
2016
- 2016-07-26 JP JP2018504280A patent/JP6661748B2/en active Active
- 2016-07-26 EP EP16741958.9A patent/EP3329462B1/en active Active
- 2016-07-26 CN CN201680052950.6A patent/CN108027971B/en active Active
- 2016-07-26 US US15/748,098 patent/US10685447B2/en active Active
- 2016-07-26 KR KR1020187005670A patent/KR102052840B1/en active Active
- 2016-07-26 WO PCT/EP2016/067715 patent/WO2017017077A1/en not_active Ceased
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11155248B2 (en) * | 2017-09-26 | 2021-10-26 | Robert Bosch Gmbh | Method for ascertaining the slope of a roadway |
| CN112970028A (en) * | 2018-09-06 | 2021-06-15 | 罗伯特·博世有限公司 | Method for selecting image segments of a sensor |
| US20210255330A1 (en) * | 2018-09-06 | 2021-08-19 | Robert Bosch Gmbh | Method for selecting an image detail of a sensor |
| US11989944B2 (en) * | 2018-09-06 | 2024-05-21 | Robert Bosch Gmbh | Method for selecting an image detail of a sensor |
| US11544895B2 (en) * | 2018-09-26 | 2023-01-03 | Coherent Logix, Inc. | Surround view generation |
| US20220089144A1 (en) * | 2020-09-24 | 2022-03-24 | Toyota Jidosha Kabushiki Kaisha | Home position estimation system and home position estimation method |
| US11772631B2 (en) * | 2020-09-24 | 2023-10-03 | Toyota Jidosha Kabushiki Kaisha | Home position estimation system and home position estimation method |
| US20240404298A1 (en) * | 2023-06-01 | 2024-12-05 | Hyundai Motor Company | Apparatus and method for determining a position of an object outside a vehicle |
Also Published As
| Publication number | Publication date |
|---|---|
| EP3329462A1 (en) | 2018-06-06 |
| KR102052840B1 (en) | 2019-12-05 |
| EP3329462B1 (en) | 2019-06-26 |
| CN108027971B (en) | 2022-03-01 |
| JP2018524741A (en) | 2018-08-30 |
| JP6661748B2 (en) | 2020-03-11 |
| WO2017017077A1 (en) | 2017-02-02 |
| KR20180035851A (en) | 2018-04-06 |
| DE102015112289A1 (en) | 2017-02-02 |
| US10685447B2 (en) | 2020-06-16 |
| CN108027971A (en) | 2018-05-11 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10685447B2 (en) | Method for identifying an object in a region surrounding a motor vehicle, driver assistance system and motor vehicle | |
| US8270676B2 (en) | Method for automatic full beam light control | |
| US20190012537A1 (en) | Method for identifying an object in a surrounding region of a motor vehicle, driver assistance system and motor vehicle | |
| JP5363085B2 (en) | Headlight control device | |
| JP4993322B2 (en) | Identification and classification of light spots around the vehicle by a camera | |
| US8310662B2 (en) | Method for detecting misalignment of a vehicle headlight using a camera | |
| EP3328069B1 (en) | Onboard environment recognition device | |
| JP5962850B2 (en) | Signal recognition device | |
| US20130027511A1 (en) | Onboard Environment Recognition System | |
| US20120275172A1 (en) | Vehicular headlight apparatus | |
| CN104732233A (en) | Method and apparatus for recognizing object reflections | |
| US9372112B2 (en) | Traveling environment detection device | |
| US9376052B2 (en) | Method for estimating a roadway course and method for controlling a light emission of at least one headlight of a vehicle | |
| JP6756507B2 (en) | Environmental recognition device | |
| US8965142B2 (en) | Method and device for classifying a light object located ahead of a vehicle | |
| DE102006055906A1 (en) | Retro reflector and vehicle light identifying method, involves classifying objects as retro-reflector or light, based on movement of light spots in image and time-dependant intensity gradient of spots and position of spots in image | |
| US10896337B2 (en) | Method for classifying a traffic sign, or road sign, in an environment region of a motor vehicle, computational apparatus, driver assistance system and motor vehicle | |
| US9885630B2 (en) | Method and device for analyzing a light emission of a headlight of a vehicle | |
| KR101180676B1 (en) | A method for controlling high beam automatically based on image recognition of a vehicle | |
| WO2019013253A1 (en) | Detection device | |
| KR20230026262A (en) | Methdo for high-beam assistance in motor veicle and high-beam assistant for motor vehicle |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| AS | Assignment |
Owner name: VALEO SCHALTER UND SENSOREN GMBH, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SERGEEV, NIKOLAI;REEL/FRAME:045074/0678 Effective date: 20180209 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |