WO2008141753A1 - Method for object recognition - Google Patents
Method for object recognition
- Publication number
- WO2008141753A1 (PCT/EP2008/003831)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- determined
- signal
- objects
- signal streams
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/255—Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Description
Method for object recognition
The invention relates to a method for object recognition according to the preamble of claim 1.
The recognition and spatial localization of objects is an essential prerequisite for realizing assistance systems in motor vehicles which, for example, brake automatically depending on the situation or support the driver during braking. Knowledge of the distance and the direction of movement of an object in front of the motor vehicle is particularly important for this purpose.
Modern vehicles are frequently equipped with a large number of sensors, for example cameras, infrared sensors for the near and/or far infrared range, or radar systems, each of which performs different tasks and to this end detects different physical measured variables.
Methods are already known in which an object is detected by means of two sensors of the same type, for example two cameras; the disparity between the images of the sensors is evaluated and the distance to the object is determined from it. The disadvantage is that spatial localization of objects then requires sensor pairs that detect the same physical measured variable. It is therefore an object of the invention to provide an improved method for object recognition.
According to the invention, this object is achieved by a method having the features of claim 1. Advantageous developments are the subject of the dependent claims.
In a method for object recognition according to the invention, at least two signal streams are considered, each supplied by one sensor, wherein at least two of the signal streams represent different physical measured variables with different imaging properties. For example, one of the sensors may be a camera and the other a radar system; combinations of near-infrared and far-infrared sensors are equally conceivable. From each of the signal streams, at least one object hypothesis, i.e. a potential object, is identified. On the basis of the object hypothesis, at least one feature is generated for each of the signal streams. At least one of the object hypotheses is evaluated on the basis of at least one of its features by means of at least one classifier and, if the evaluation succeeds, is assigned as an object to one or more classes, for example recognized as a person or an oncoming vehicle. Several object hypotheses may be recognized by the classification as a single object. A position of each of the objects is determined in an image of the respective signal stream. Correspondences between the signal streams are determined from a relationship between the positions of the objects of different signal streams, or of their features. From this correspondence a depth map is created: a 3D representation of the object in its context. In this way, the distance and spatial position of the object relative to the sensors can be determined.

Whereas known methods had to determine correspondences of the same object, or identical features of the same object, in the images of two signal streams from a sensor pair detecting the same physical measured variable, the present method makes it possible to determine correspondences, and thus the spatial position of an object, from the signal streams of sensors that are sensitive to different physical measured variables.
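Summarized as code, the processing chain reads roughly as follows. This is a minimal sketch, not part of the patent: the names (ObjectHypothesis, generate_hypotheses, classify, build_depth_map) are illustrative assumptions, and the modality-specific detectors and classifiers are deliberately left as stubs.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ObjectHypothesis:
    stream_id: int                        # which signal stream S1..Sn it came from
    position: tuple                       # 2D position in that stream's image
    features: dict = field(default_factory=dict)  # e.g. {"centroid_x": ..., "area": ...}
    object_class: Optional[str] = None    # set after successful classification

def generate_hypotheses(stream_id, image):
    """Modality-specific detector (strong radar returns, edge contours,
    warm regions, ...); returns a list of ObjectHypothesis."""
    raise NotImplementedError

def classify(hypothesis, all_hypotheses):
    """Evaluate one hypothesis by its features; may also draw on features of
    hypotheses from other streams. Returns a class name or None."""
    raise NotImplementedError

def build_depth_map(corresponding_pairs):
    """Turn matched cross-stream positions into a 3D representation
    (see the triangulation sketch further below)."""
    raise NotImplementedError

def recognize(images):
    """images: {stream_id: 2D image}. Runs the steps of the method in order."""
    # 1. at least one object hypothesis per signal stream
    hypotheses = [h for sid, img in images.items()
                  for h in generate_hypotheses(sid, img)]
    # 2. classification turns successfully evaluated hypotheses into objects
    objects = []
    for h in hypotheses:
        h.object_class = classify(h, hypotheses)
        if h.object_class is not None:
            objects.append(h)
    # 3. correspondence: objects of the same class observed in different
    #    streams are assumed to be the same physical object
    pairs = [(a, b) for a in objects for b in objects
             if a.stream_id < b.stream_id and a.object_class == b.object_class]
    # 4. depth map from the correspondences
    return build_depth_map(pairs)
```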
Embodiments of the invention are explained in more detail below with reference to drawings, in which:
Fig. 1 shows a schematic representation of a data processing unit with two sensors, their signal streams and images, and a depth map for the localization of a motor vehicle, and

Fig. 2 shows a schematic representation of a data processing unit with two sensors, their signal streams and images, and a depth map for the localization of a person.
Corresponding parts are provided with the same reference numerals in all figures.
Figure 1 shows a data processing unit 1, arranged for example in a motor vehicle, to which signal streams S1, S2 from at least two sensors 2.1, 2.2 are supplied. In the example chosen, sensor 2.1 is a radar system that scans an area in front of the motor vehicle. Sensor 2.2 is, for example, a camera.
The signal stream S1 provides a two-dimensional image 3.1, in which several object hypotheses OH1 to OH4 that strongly reflect the radar radiation are identified. Their typical arrangement suggests that they belong to an object O1, e.g. a motor vehicle of which the headlights are visible; this can be established by evaluating the object hypotheses OH1 to OH4 with a suitable classifier.
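How the typical arrangement of the hypotheses OH1 to OH4 might be tested against a "pair of headlights" pattern can be sketched as follows; the pairing rule and the numeric tolerances are invented for illustration and are not prescribed by the patent.

```python
def find_headlight_pairs(returns,
                         pair_width=(1.2, 2.2),
                         max_dy=0.3):
    """Group strong radar returns (x, y in metres, vehicle coordinates) into
    pairs whose lateral spacing matches a typical headlight separation.
    The width interval and the vertical tolerance are illustrative values."""
    pairs = []
    for i in range(len(returns)):
        for j in range(i + 1, len(returns)):
            (x1, y1), (x2, y2) = returns[i], returns[j]
            dx, dy = abs(x1 - x2), abs(y1 - y2)
            if pair_width[0] <= dx <= pair_width[1] and dy <= max_dy:
                pairs.append((i, j))  # candidate pair -> input to the vehicle classifier
    return pairs
```

A classifier would then judge the resulting candidate pairs, together with the remaining returns, to decide whether they form a vehicle hypothesis.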
The signal stream S2 provides a two-dimensional image 3.2. Edge detection yields an object hypothesis OH5, which evaluation with a classifier identifies as the outline of an object O2, e.g. a motor vehicle. The position of the objects O1, O2 thus classified is now determined in their respective images 3.1, 3.2, for example by determining one feature M1, M2 per image, such as the area centroid. Both objects O1, O2 evidently belong to the same object class "motor vehicles"; it is therefore assumed that they are one and the same object O1. From a relationship between the determined positions of the objects O1, O2, or of their features M1, M2, a correspondence of the signal streams S1, S2 is determined. From the correspondence a depth map 4 is then computed, which represents a three-dimensional representation of the object O1 in a coordinate system in which the position of the sensors 2.1, 2.2 is known.
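The centroid features M1, M2 and a depth estimate derived from their correspondence could look roughly like this. The sketch assumes two calibrated, rectified imaging sensors a known baseline apart, an assumption the patent does not spell out in this form:

```python
def centroid(mask):
    """Area centroid of a binary object mask (a feature M in the sense above).
    mask: 2D list of booleans, mask[y][x] == True inside the object."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pts:
        raise ValueError("empty object mask")
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def depth_from_correspondence(u1, u2, baseline_m, focal_px):
    """Triangulation of one matched horizontal image coordinate pair (u1, u2)
    from two spaced, rectified sensors; returns the distance in metres."""
    disparity = u1 - u2
    if disparity <= 0:
        raise ValueError("correspondence yields non-positive disparity")
    return baseline_m * focal_px / disparity

# e.g. depth_from_correspondence(412.0, 396.0, baseline_m=0.4, focal_px=800.0) -> 20.0 m
```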
A similar procedure is conceivable for the localization of a person, as shown in Figure 2. Sensor 2.1 is here a far-infrared sensor, which is sensitive to warm surfaces in a cooler environment. Sensor 2.2 is a near-infrared sensor, which provides the contours of objects. The object hypothesis OH1 recognized in the image 3.1 of the signal stream S1 from sensor 2.1 (e.g. an oval warm surface for a head) is here the head of a person, which stands out clearly from its surroundings, but also from the clothed body parts of the object O1 "person". The object hypothesis OH2 recognized in the image 3.2 of the signal stream S2 from sensor 2.2 is the contour of the object O1 "person". The object hypothesis OH2 can be classified directly as the object O1 "person". Because an oval warm surface has relatively weak discriminative power against other identified objects, the object hypothesis OH1 is preferably classified with the aid of the classification features of the further object hypothesis OH2, or of the object O1 "person" already classified in image 3.2.
The position of the object O1 thus classified is now determined in its respective images 3.1, 3.2, for example by determining one feature M1, M2 each, e.g. an area centroid, of the object hypotheses OH1, OH2. From a relationship between the determined positions of the features M1, M2, a correspondence of the signal streams S1, S2 is determined. For the object class "person" it is known that the features M1, M2 (area centroids) of the object hypotheses OH1 (head or face) and OH2 (contour of the person) do not coincide, which is taken into account when forming the correspondence. From the correspondence a depth map 4 is then determined. The classification of objects O1 to On from object hypotheses OH1 to OHn can be carried out separately for each signal stream S1, S2, or using the object hypotheses OH1 to OHn, or their features, from several of the signal streams S1, S2.
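The known, class-specific displacement between the head centroid (far infrared) and the whole-body centroid (near infrared) can be compensated before the positions are compared, e.g. as in the following sketch; the offset table and its numeric value are illustrative assumptions, not values from the patent:

```python
# Class-specific displacement between the feature of one modality and the
# feature of the other (here: the head centroid sits above the body centroid).
# The numeric value is an invented example, in normalized image units.
FEATURE_OFFSET = {"person": (0.0, -0.35)}  # (dx, dy)

def aligned_position(pos, object_class):
    """Shift a feature position by the known class-specific offset."""
    dx, dy = FEATURE_OFFSET.get(object_class, (0.0, 0.0))
    return (pos[0] + dx, pos[1] + dy)

def residual(pos_fir, pos_nir, object_class):
    """Distance between the offset-corrected FIR feature and the NIR feature;
    a small residual supports the cross-stream correspondence."""
    ax, ay = aligned_position(pos_fir, object_class)
    return ((ax - pos_nir[0]) ** 2 + (ay - pos_nir[1]) ** 2) ** 0.5
```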
The sensors 2.1, 2.2 can be arranged at a distance from one another.
The combination of features M1, M2 of different signal streams S1, S2 and the determination of the correspondences can be trained automatically and/or manually in a self-learning method.
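One conceivable realization of such a self-learning step, purely as an assumption on our part, is a binary classifier trained on cross-stream feature pairs that were confirmed (automatically or manually) to belong to the same object; the choice of a logistic model here is illustrative:

```python
from sklearn.linear_model import LogisticRegression

def train_correspondence_model(pair_features, labels):
    """pair_features: one row per candidate pairing, the concatenated feature
    vectors (M_i from stream S1, M_j from S2); labels: 1 if the pairing was
    confirmed as the same object, else 0."""
    model = LogisticRegression()
    model.fit(pair_features, labels)
    return model  # model.predict_proba then scores new cross-stream pairings
```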
The spatial position of at least one of the objects O1, O2 can be tracked, and a flow vector can be determined in each case, which describes the course of a movement of the object O1, O2 and can be used to predict an expected movement. For example, the movement of a person at the edge of the road can be detected and it can be predicted whether the person will step onto the roadway.
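A flow vector in this sense can be approximated by differencing tracked positions and extrapolating linearly; the prediction horizon below is an invented example value:

```python
def flow_vector(track, dt):
    """Velocity estimate from the last two tracked 3D positions of an object.
    track: list of (x, y, z) positions; dt: time between the last two samples."""
    (x0, y0, z0), (x1, y1, z1) = track[-2], track[-1]
    return ((x1 - x0) / dt, (y1 - y0) / dt, (z1 - z0) / dt)

def will_enter_roadway(track, dt, road_edge_y, horizon_s=2.0):
    """Linear extrapolation over a short horizon: does the predicted lateral
    position cross the roadway edge? The horizon is an illustrative choice."""
    _, vy, _ = flow_vector(track, dt)
    _, y, _ = track[-1]
    return (y + vy * horizon_s) >= road_edge_y
```

Averaging the differences over several frames would smooth the estimate; the two-point version keeps the sketch minimal.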
Groups of features M1, M2, for example texture properties, of at least one of the objects O1, O2 or of at least one of the object hypotheses OH1 to OH5 of at least one of the signal streams S1, S2 can be formed, and the relationship of the position of the group relative to a feature M1, M2, or to a group, of one of the objects O1, O2 or one of the object hypotheses OH1 to OH5 in another of the signal streams S1, S2 can be determined.
More than two signal streams S1 to Sn, corresponding to as many sensors 2.1 to 2.n, can be evaluated and correspondences determined in them. Different types of features M1, M2 can be taken into account simultaneously.
List of reference numerals
- 1 data processing unit
- 2.1 to 2.n sensor
- 3.1, 3.2 image
- 4 depth map
- M1, M2 feature
- OH1 to OHn object hypothesis
- O1, O2 object
- S1 to Sn signal stream
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE102007024639A DE102007024639A1 (en) | 2007-05-24 | 2007-05-24 | Object e.g. motor vehicle, recognition method for supporting driver, involves determining correspondences between two of signal flows from relationship of position of objects, and providing depth map from one of correspondences |
| DE102007024639.2 | 2007-05-24 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2008141753A1 (en) | 2008-11-27 |
Family
ID=38806189
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2008/003831 (WO2008141753A1, ceased) | Method for object recognition | 2007-05-24 | 2008-05-13 |
Country Status (2)
| Country | Link |
|---|---|
| DE (1) | DE102007024639A1 (en) |
| WO (1) | WO2008141753A1 (en) |
- 2007-05-24: priority application DE102007024639A filed in Germany (published as DE102007024639A1; status: withdrawn)
- 2008-05-13: international application PCT/EP2008/003831 filed (published as WO2008141753A1; status: ceased)
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2003021967A2 (en) * | 2001-09-04 | 2003-03-13 | Icerobotics Limited | Image fusion systems |
| WO2005038743A1 (en) * | 2003-10-16 | 2005-04-28 | Bayerische Motoren Werke Aktiengesellschaft | Method and device for visualising the surroundings of a vehicle |
Non-Patent Citations (3)
| Title |
|---|
| AMDITIS A ET AL: "Multiple - sensor - collision avoidance system for automotive applications using an IMM approach for obstacle tracking", INFORMATION FUSION, 2002. PROCEEDINGS OF THE FIFTH INTERNATIONAL CONFERENCE ON JULY 8-11, 2002, PISCATAWAY, NJ, USA, IEEE, vol. 2, 8 July 2002 (2002-07-08), pages 812 - 817, XP010594274, ISBN: 978-0-9721844-1-0 * |
| ANDREONE L ET AL: "A new driving supporting system, integrating an infrared camera and an anti-collision micro-wave radar: the EUCLIDE project", INTELLIGENT VEHICLE SYMPOSIUM, 2002. IEEE JUN 17-21, 2002, PISCATAWAY, NJ, USA,IEEE, vol. 2, 17 June 2002 (2002-06-17), pages 519 - 526, XP010635877, ISBN: 978-0-7803-7346-4 * |
| MAHLISCH M ET AL: "Sensorfusion Using Spatio-Temporal Aligned Video and Lidar for Improved Vehicle Detection", INTELLIGENT VEHICLES SYMPOSIUM, 2006 IEEE MEGURO-KU, JAPAN 13-15 JUNE 2006, PISCATAWAY, NJ, USA,IEEE, 13 June 2006 (2006-06-13), pages 424 - 429, XP010937050, ISBN: 978-4-901122-86-3 * |
Also Published As
| Publication number | Publication date |
|---|---|
| DE102007024639A1 (en) | 2008-01-10 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 08758490; Country of ref document: EP; Kind code of ref document: A1 |
| | DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 08758490; Country of ref document: EP; Kind code of ref document: A1 |