WO2016012289A1 - Method for determining characteristic pixels for a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle - Google Patents
- Publication number
- WO2016012289A1 (PCT/EP2015/065928, EP2015065928W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- characteristic pixels
- camera
- partial areas
- motor vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Definitions
- the invention relates to a method for determining characteristic pixels for a camera system of a motor vehicle.
- an image sequence of an environmental region of the motor vehicle which includes a temporal sequence of images, is captured by means of a camera of the camera system.
- a predetermined number of partial areas of one of the images of the image sequence is selected by means of an image processing device of the camera system.
- the invention relates to a camera system for a motor vehicle, which is formed for performing such a method, to a driver assistance system with such a camera system as well as to a motor vehicle with such a driver assistance system.
- characteristic pixels are determined by means of a detector of the image processing device of the camera system.
- the detector is adapted to detect characteristic or prominent pixels in the image and to extract them from it.
- the detector is also known as an interest operator and aims at extracting certain high-frequency or high-saliency pixels of the image.
- Known detectors are for example the Harris operator, the SIFT operator or the FAST operator. The enumerated operators all share the same goal, namely the extraction of corners or salient features in the image.
- a corner can for example be the point of intersection of two edges in the image, which preferably has high contrast. These edges can for example be visualized by differentiating the image in the form of a gradient image.
- There are also detectors, such as the FAST operator, which do not use gradient images.
- the characteristic pixels are detected in the entire image. This can lead to a very uneven distribution of the characteristic pixels in the image: in areas of the image with many corners and high contrast, many characteristic pixels are detected, while in areas with few corners and low contrast, few characteristic pixels are detected.
- a grid can be used, which divides the image into the partial areas.
- the most intense characteristic pixels are selected from each of the partial areas. For example, this is known from the conference contribution "Grid-Based Spatial Keypoint Selection for Real Time Visual Odometry" by V. Nannen and G. Oliver, 2nd International Conference on Pattern Recognition Applications and Methods, Barcelona, 2013. There, a method is shown in which the characteristic pixels are detected depending on a grid. This grid divides the image into partial areas. The intensity of the characteristic pixels can be determined by a confidence value, which is returned by the detector for each characteristic pixel. The selection of the characteristic pixels is then effected such that a substantially identical number of the characteristic pixels is present for each partial area.
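The prior-art selection scheme described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not code from the cited paper; the function name and data layout are invented:

```python
# Sketch of grid-based keypoint selection: detect everywhere first,
# then keep only the strongest keypoints per grid cell, so that each
# partial area contributes roughly the same number of features.

def select_per_cell(keypoints, cell_w, cell_h, per_cell):
    """keypoints: (x, y, confidence) tuples as returned by a detector.
    Keep at most `per_cell` keypoints with the highest confidence in
    each cell of a cell_w x cell_h grid."""
    cells = {}
    for x, y, conf in keypoints:
        cells.setdefault((x // cell_w, y // cell_h), []).append((x, y, conf))
    selected = []
    for pts in cells.values():
        pts.sort(key=lambda p: p[2], reverse=True)  # strongest first
        selected.extend(pts[:per_cell])
    return selected

kps = [(1, 1, 0.9), (2, 1, 0.5), (3, 2, 0.7), (12, 1, 0.4)]
print(select_per_cell(kps, 10, 10, 2))  # the weakest point of cell (0, 0) is dropped
```

Note that in this scheme all keypoints must be computed before any are discarded, which is exactly the computational drawback discussed in the following paragraph.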
- the disadvantage remains that these characteristic pixels, which are later available for the selection process, first have to be calculated. This means that computing time is expended on characteristic pixels that are never used in the further course of the method.
- this object is solved by a method, by a camera system, by a driver assistance system as well as by a motor vehicle having the features according to the respective independent claims.
- Advantageous implementations of the invention are the subject matter of the dependent claims, of the description and of the figures.
- a method for determining characteristic pixels for a camera system of a motor vehicle includes capturing an image sequence of an environmental region of the motor vehicle, which includes a temporal sequence of images, by means of a camera of the camera system, selecting a predetermined number of partial areas of one of the images of the image sequence by means of an image processing device of the camera system, wherein the following steps are performed for at least one of the partial areas: presetting a target value for a number of the characteristic pixels, determining the characteristic pixels of a first image of the image sequence by means of a detector of the camera system depending on a parameter of the detector, which describes a sensitivity of the detector in calculating the characteristic pixels, determining the number of the characteristic pixels, comparing the number of the characteristic pixels to the target value, adapting the parameter of the detector depending on the comparison and determining the characteristic pixels of a second image of the image sequence depending on the respectively adapted parameters.
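Stripped of claim language, the loop over one partial area can be sketched as follows. This is a hedged illustration, not the patented implementation: it assumes a FAST-like detector whose single threshold parameter yields fewer detections as it grows, and the simple proportional update stands in for the exact formulas given later in the description. All names are invented:

```python
# Sketch of the claimed per-tile feedback loop (steps S3-S9): detect,
# count, compare to the target, adapt the parameter for the next image.

def adapt(th, n_dp, n_des, gain=0.5):
    """Nudge the threshold so the next frame's count moves toward n_des."""
    return th + gain * (n_dp - n_des)   # too many -> raise, too few -> lower

def track_tile(count_fns, th0, n_des):
    """count_fns: one callable per frame mapping threshold -> detection
    count in this tile (stands in for running the real detector)."""
    th, counts = th0, []
    for count_at in count_fns:
        n = count_at(th)                # detect and count (S3, S5)
        counts.append(n)
        th = adapt(th, n, n_des)        # compare and adapt (S6-S9)
    return counts

# Toy detector model: the count falls linearly with the threshold.
frames = [lambda th: max(0, int(100 - th))] * 6
print(track_tile(frames, th0=10.0, n_des=50))  # counts approach the target 50
```

The point of the feedback is that the detector is only ever asked to produce roughly the desired number of pixels per tile, instead of producing many and discarding most.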
- the method serves for determining characteristic pixels or characteristic features in respective images of an image sequence.
- the image sequence is provided by a camera of the camera system.
- the camera is a video camera, which is able to provide a plurality of images (frames) per second.
- the camera can be a CCD camera or a CMOS camera or any other suitable imaging device.
- the camera can also be a thermal sensor such as a microbolometer which provides images in the infrared spectrum.
- a first image is divided into a plurality of partial areas. For at least one of the partial areas, a target value for a number of the characteristic pixels is predetermined. Therein, a target value can also be preset for each partial area of the first image.
- the determination is dependent on a parameter of the detector, which describes a sensitivity of the detector in calculating the characteristic pixels.
- the sensitivity of the detector in particular describes how pronounced a characteristic pixel has to be in order to be detected as such. Here, high prominence means in particular a large contrast difference at a corner or edge of the image.
- Which parameter is used depends on the algorithm of the detector; several parameters can also be used to adjust the sensitivity of the detector.
- after the characteristic pixels have been determined, their number is determined.
- the number of the characteristic pixels can be determined by counting these characteristic features in each partial area. After determining the number of the characteristic pixels, this number is compared to the target value.
- the parameter of the detector is adapted depending on the comparison.
- the characteristic pixels are calculated for the same partial area of a second image of the image sequence depending on the adapted parameter.
- the number of the characteristic pixels in the partial area of the second image corresponds to the target value or is closer to the target value than in the first image.
- the method according to the invention makes it possible to ensure a uniform distribution of the characteristic pixels across the entire image due to the partial areas. Because the parameter of the detector is adapted for each of the images of the image sequence, the calculation of the characteristic pixels can be restricted to the number that is actually intended or desired. In other words, the method allows a spatially and temporally adapted determination or calculation of the characteristic pixels. In an embodiment, it is provided that the same target value for the number of the characteristic pixels is preset for each of the partial areas. A uniform distribution of the characteristic pixels across the image can thereby be ensured.
- the parameter of the detector is adapted such that the sensitivity of the detector is greater than a predetermined minimum value.
- a predetermined minimum value is provided.
- one or more parameters of the detector controlling its sensitivity can be chosen such that the detection, namely the extraction of the characteristic pixels, provides a reasonable result. If the detector is adjusted too sensitively via its parameters, only image noise is detected. Image noise here means interference that has no relation to the actual scene content.
- the advantage of the predetermined minimum value is therefore that the image noise does not contribute to the number of the characteristic pixels and the characteristic pixels can be reliably determined.
- a region of interest is preset in the respective image and the partial areas are selected in the region of interest of the respective image.
- the region of interest can for example be an area of the image, which has high information content.
- parts of the motor vehicle, such as for example a number plate, are depicted in the image due to a characteristic of a lens of the camera, in particular a fish-eye lens. These parts of the motor vehicle do not contain important information either and can be excluded from the region of interest.
- it is advantageous in the selection of the region of interest that the characteristic pixels are determined exclusively in an area with high information content. This reduces the required computational effort for determining the characteristic pixels and at the same time reduces the error rate in determining the characteristic pixels.
- the partial areas are selected by dividing the respective image by means of a grid.
- a grid is laid over the respective image and thus the individual partial areas are determined. This has the advantage that the position of the partial areas can be described very precisely. Furthermore, the partial areas can also be described by complicated geometric shapes of the grid.
- the respective region of interest is divided into identically sized partial areas by the grid.
- the region of interest can be divided such that all of the partial areas have the same dimensions.
- the identically sized partial areas have the advantage that the computing time for calculating or determining the characteristic pixels is better predictable and can range substantially in the same order of magnitude for all of the partial areas.
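As a concrete illustration (function name and values invented here), dividing a rectangular region of interest into identically sized partial areas amounts to simple integer arithmetic:

```python
# Sketch: split a region of interest into cols x rows identically sized
# tiles, each returned as an (x, y, w, h) rectangle in image coordinates.

def grid_tiles(roi_x, roi_y, roi_w, roi_h, cols, rows):
    """Assumes roi_w and roi_h are divisible by cols and rows, so that
    every tile has exactly the same dimensions."""
    tw, th = roi_w // cols, roi_h // rows
    return [(roi_x + c * tw, roi_y + r * th, tw, th)
            for r in range(rows) for c in range(cols)]

tiles = grid_tiles(0, 100, 1280, 600, 8, 4)
print(len(tiles), tiles[0])  # 32 tiles of 160 x 150 pixels each
```

Because every tile has the same pixel count, the detector's per-tile running time is roughly constant, which is the predictability advantage mentioned above.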
- the partial areas are selected depending on a field of view of the camera, which contains exclusively information about the environmental region.
- this can mean that the intended region of interest contains more than only the environmental region of the motor vehicle.
- the image has been transformed and the edge area of the image has been filled with the filling pixels.
- the filling is done so that the image has a rectangular shape.
- the rectangular shape in turn facilitates further processing of the image. If only partial areas within the field of view of the camera are used, the computational effort for calculating the characteristic pixels can be reduced.
- an image size of each of the partial areas is set depending on an angle of the respective partial area to an optical axis of the camera by means of the image processing device in selecting the partial areas.
- This approach is advantageous because the image has certain biases or distortions depending on a lens of the camera. These distortions are usually the greater, the farther one of the pixels is from the optical axis of the camera, i.e. the closer it is to the edge of the image. The distortions can be particularly pronounced if the camera has a special lens, in particular a fish-eye lens.
- the image is transformed in order to remove the distortions. This transformation can result in the partial areas containing different numbers of pixels, depending on the distance of the respective partial area from the optical axis of the camera.
- a three-dimensional grid is transformed from a world coordinate system into a two-dimensional image coordinate system of the image.
- the three-dimensional grid is imaginarily spanned in the space or in the environmental region of the motor vehicle and each partial area for example covers an identically sized area in the real world or in the world coordinate system.
- geometric characteristics of the camera are taken into account, and the partial areas in the two-dimensional image coordinate system, i.e. in the image, each include the same number of pixels as the respectively corresponding partial areas in the world coordinate system.
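The transformation of a world-coordinate grid into the image can be illustrated with a minimal pinhole projection. A real implementation would also apply the camera's extrinsic pose and the fish-eye distortion model; the intrinsic values below (fx, fy, cx, cy) are made-up examples:

```python
# Minimal pinhole-projection sketch: map 3-D grid corner points given in
# camera coordinates (x right, y down, z forward, in metres) onto the
# image plane. Intrinsics are illustrative, not from the patent.

def project(pt_cam, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
    """Return (u, v) pixel coordinates, or None if the point lies behind
    the camera and is therefore not visible."""
    x, y, z = pt_cam
    if z <= 0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)

# Corners of one ground-plane grid cell, 1 m x 1 m, starting 5 m ahead:
cell = [(0.0, 1.0, 5.0), (1.0, 1.0, 5.0), (1.0, 1.0, 6.0), (0.0, 1.0, 6.0)]
print([project(p) for p in cell])
```

Projecting every cell of the world grid this way yields image-plane partial areas that shrink with distance, which is how equal world-space coverage translates into the non-uniform image grid of Fig. 5.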
- the determination of the characteristic pixels of the partial area by the image processing device is performed in an internal memory of a digital signal processing device of the image processing device.
- the digital signal processing device includes an internal memory (on-chip memory). It is connected to an external memory of the image processing device by means of a bus.
- the internal memory usually has a low storage volume compared to the external memory.
- the access times by the digital signal processing device to the internal memory are shorter than to the external memory. This is because the bus connecting the external memory acts as a limiting factor on the data throughput.
- the advantage of the partial areas is now that each entire partial area can be transferred into the internal memory to determine the characteristic pixels.
- the image typically has a data size that does not allow shifting the entire image into the internal memory.
- the entire image can be stored in the external memory, while the partial areas of the image are shifted into the internal memory for processing or for determining the characteristic pixels.
- This approach is also referred to as a block-based memory transfer.
- the grid and thereby the size of the partial areas can be set such that the memory size of the internal memory is exactly sufficient for each one of the partial areas.
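Choosing the grid so that each partial area just fits the on-chip memory can be sketched as follows; the one-byte-per-pixel grey image and the memory figure are assumptions for illustration:

```python
import math

def tile_grid(img_w, img_h, internal_bytes, bytes_per_pixel=1):
    """Choose a cols x rows grid so that each partial area fits into the
    internal (on-chip) memory (sketch: square-ish tiles)."""
    budget = internal_bytes // bytes_per_pixel   # pixels that fit on-chip
    side = int(math.sqrt(budget))                # side of a square tile
    return math.ceil(img_w / side), math.ceil(img_h / side)

# e.g. a 1280x800 grey-value image and 256 KiB of on-chip memory:
print(tile_grid(1280, 800, 256 * 1024))  # (3, 2): six tiles of about 427x400 px
```

Each resulting tile (about 427 x 400 = 170,800 pixels here) stays below the 262,144-pixel budget, so one tile at a time can be shuttled over the bus.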
- the characteristic pixels are calculated with a corner detection method, in particular with a FAST algorithm.
- the corner detection methods usually provide particularly prominent characteristic pixels.
- the FAST algorithm (Features from Accelerated Segment Test) is known for the fact that the characteristic pixels can be determined particularly fast. This has the advantage that the characteristic pixels can be determined in real time in image sequences with a high frame rate.
- the FAST algorithm does not lead to any discontinuities at the borders of the partial area.
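For illustration, the core FAST segment test can be re-implemented in a few lines (the patent merely names FAST as one usable detector; this toy version omits non-maximum suppression and the early-exit speedups of the real algorithm):

```python
# Toy FAST segment test: a pixel is a corner if at least n contiguous
# pixels on a radius-3 Bresenham circle are all brighter or all darker
# than the centre by more than the threshold t.

OFFSETS = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2),
           (1, 3), (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1),
           (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=9):
    """img: 2-D list of grey values; (x, y) at least 3 px from the border."""
    c = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in OFFSETS]
    for sign in (1, -1):                  # brighter run, then darker run
        run = best = 0
        for v in ring + ring:             # ring doubled so runs may wrap
            run = run + 1 if sign * (v - c) > t else 0
            best = max(best, run)
        if best >= n:
            return True
    return False

# Synthetic 9x9 patch with a bright 4x4 block: its inner corner (3, 3)
# passes the test, a point in the flat dark area does not.
img = [[200 if (xx < 4 and yy < 4) else 50 for xx in range(9)] for yy in range(9)]
print(is_fast_corner(img, 3, 3), is_fast_corner(img, 5, 5))  # True False
```

The threshold t here plays the role of the sensitivity parameter of the method: raising it demands a larger contrast difference and thus yields fewer characteristic pixels.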
- the provided method is not restricted to corner detection methods or the FAST algorithm.
- An edge detection method, for example a Canny edge detector, can also be used.
- a Harris detector or a Förstner detector can also be used to determine the characteristic pixels.
- the present method can be performed with any detector for determining the characteristic pixels.
- a camera system according to the invention for a motor vehicle includes at least one camera for providing a sequence of images of an environmental region of the motor vehicle and an image processing device adapted to perform the method according to the invention.
- a driver assistance system includes a camera system according to the invention.
- a motor vehicle according to the invention includes a driver assistance system according to the invention.
- the driver assistance system is an electronic auxiliary device for assisting a driver in certain driving situations. Furthermore, the driver assistance system warns the driver during or shortly before critical traffic situations by a suitable human-machine interface.
- FIG. 1 in schematic plan view a motor vehicle with a camera system including a camera and an image processing device;
- Fig. 2 a flow diagram of a method according to an embodiment of the invention
- Fig. 3 in schematic illustration an image of an environmental region of the motor vehicle provided by the camera, wherein partial areas are selected by means of a grid;
- Fig. 4 in schematic illustration the motor vehicle in side view and a three-dimensional grid in a world coordinate system of the environmental region;
- FIG. 5 in schematic illustration the image according to Fig. 3, wherein the partial areas are selected by means of a grid arising from a transformation of the three-dimensional grid;
- Fig. 6 in schematic illustration the image according to Fig. 3, wherein the partial areas are selected depending on a field of view of the camera;
- Fig. 7 in schematic illustration a part of an image processing device, which includes a digital signal processing device with an internal memory as well as an external memory.
- a plan view of a motor vehicle 1 with a camera system 2 is schematically illustrated.
- the camera system 2 includes a camera 3 and an image processing device 4, which can for example be integrated in the camera 3.
- this image processing device 4 can also be a component separate from the camera 3, which can be disposed in any position in the motor vehicle 1.
- the camera 3 is disposed at the rear of the motor vehicle 1 and captures an environmental region 5 behind the motor vehicle 1.
- an application with a front camera or a lateral camera or a camera at any other location on the motor vehicle 1 is also possible.
- the camera 3 has a horizontal capturing angle α, which can for example have an opening range between 120° and 200°, and a vertical capturing angle (not illustrated), which can for example extend from the surface of the road directly behind the motor vehicle 1 up to the horizon and beyond. These characteristics are for example enabled by a fish-eye lens of the camera 3.
- the camera 3 can be a CMOS camera or else a CCD camera or any image capturing device, by which characteristic pixels 18 in the environmental region 5 can be detected.
- the camera 3 is a video camera, which continuously captures an image sequence or a sequence of images 6.
- the image processing device 4 then processes the image sequence in real time and can determine the characteristic pixels 18 for each image 6 of the image sequence based on this image sequence.
- the camera system 2 is for example a part of a driver assistance system or of an object recognition system, which monitors the environmental region 5 based on the detected characteristic pixels 18 and can warn a driver of the motor vehicle 1 of a collision with the output of a corresponding warning signal.
- the camera system 2 can also be a part of a system, by which a posture or a position of the motor vehicle 1 can be determined. The position determination can also be effected based on the detected characteristic pixels 18 over the image sequence. The principle of odometry underlies this approach.
- the position of the motor vehicle 1, which was originally provided by a global satellite navigation system, can be improved and/or more accurately determined with the aid of the camera system 2 if the global satellite navigation system is no longer available or only available to a limited extent. For example, this can be the case in areas that make reception of satellite signals impossible. A typical situation for this is passing through a tunnel.
- Fig. 2 shows a flow diagram of the method according to the invention.
- a predetermined number of partial areas 7 are selected from a first image 6.
- the first image 6 is a part of an image sequence with temporally consecutive images 6.
- An initialization is performed for each of the partial areas 7.
- the initialization includes determining a target value N_des, which specifies the desired number of characteristic pixels 18 for the respective partial area 7.
- a loop for determining the characteristic pixels 18 is started for each partial area 7.
- the characteristic pixels 18 in the respective partial area 7 are determined by means of a detector.
- the detector is adapted to extract prominent pixels such as corners or edges in the partial area 7. The determination occurs depending on a parameter Th_fil, which describes a sensitivity of the detector in calculating the characteristic pixels 18.
- the characteristic pixels 18 acquired with the detector are output in the form of a list.
- the list can for example be stored in a memory of the image processing device 4. This list can be provided for further processing, in particular for object recognition or for odometry, for each partial area 7 of the first image 6.
- a step S5 follows step S3, in which a number N_dp of the characteristic pixels 18 is determined.
- the characteristic pixels 18 determined with the detector are counted.
- the list with the characteristic pixels can be evaluated by means of the image processing device 4.
- the parameter Th_fil is adapted.
- a limit value factor Th_fac is determined, based on which the parameter Th_fil can be determined.
- the limit value factor Th_fac is determined depending on the number N_dp of the detected characteristic pixels 18 and the target value N_des. This can be mathematically described as follows:
- Th_fac = |N_dp - N_des| / N_des (1)
- In a step S6, the determined number N_dp of the characteristic pixels 18 is compared to the target value N_des.
- the adaptation of the parameter Th_fil is effected either in a step S7a or in a step S7b, according to whether the number N_dp is above or below the target value N_des.
- the result of this comparison is used in calculating an intermediate value Th_raw for the parameter Th_fil. This can be determined according to the following formulas:
- Th_raw = Th_ini - Th_ini * Th_fac, if N_dp > N_des. (2) In the other case, it applies:
- Th_raw = Th_ini + Th_ini * Th_fac, if N_dp < N_des. (3)
- Th_ini is an initialization value for the parameter Th_fil, which is used as the parameter the first time the characteristic pixels 18 are determined.
- the initialization value Th_ini is in particular used in determining the characteristic pixels 18 of the first image 6 of the image sequence; that means the initialization value Th_ini is in particular used in the first iteration of the algorithm.
- the initialization value Th_ini can result from a basic setting of the detector.
- the intermediate value Th_raw is decremented in step S7b.
- the intermediate value Th_raw is incremented in step S7a.
- Whether the intermediate value Th_raw is incremented or decremented depends on whether the number N_dp is above the target value N_des, in which case the parameter is adapted such that presumably fewer characteristic pixels 18 are detected in the next image 6 of the image sequence, or below the target value N_des, in which case the parameter is adapted such that presumably more characteristic pixels are detected.
- Th_raw is an intermediate value for the parameter Th_fil of the detector, to be applied for the next image 6 of the image sequence.
- the parameter Th_fil for the next image 6 of the image sequence is calculated.
- the parameter Th_fil is determined depending on Th_ini and Th_raw.
- the calculation of Th_fil can be represented as follows:
- Th_fil = (W_fil * Th_ini) + (1 - W_fil) * Th_raw. (4a)
- Equation 4a can be used in particular for the first iteration step.
- For the subsequent iterations, the calculation of Th_fil can be represented as follows:
- Th_fil = (W_fil * Th_fil) + (1 - W_fil) * Th_raw, (4b) wherein the Th_fil on the right-hand side is the value from the previous iteration.
- W_fil corresponds to a weighting factor, which performs weighting in the range from 0 to 1.
- this weighting factor W_fil is used to reduce oscillations around the target value N_des.
- the method is not restricted to this described weighting factor W_fil; all other weighting factors which are identical in effect can also be applied.
- the parameter Th_fil is determined such that the sensitivity of the detector is greater than or equal to a predetermined minimum value Th_min.
- the minimum value Th_min is required since the detector would otherwise return characteristic pixels 18 that correspond to image noise or accidentally detected pixels. If the sensitivity of the detector is too low, pixels that do not actually have this property are also detected as characteristic pixels 18. Such a characteristic pixel 18 can then no longer be uniquely associated in a subsequent image 6 of the image sequence. This can be described with the following formula:
- Th_fil = Th_min, if Th_fil < Th_min. (5)
- With the same intent, a maximum value Th_max can also be determined. If a detector outputs fewer characteristic pixels 18 when the parameter Th_fil is increased, the maximum value Th_max can be used. The maximum value Th_max is then applied as follows:
- Th_fil = Th_max, if Th_fil > Th_max. (6)
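Taken together, the threshold-update formulas (1) to (6) amount to one small function. The sketch below follows the formulas as they appear in the description, with th_prev standing for Th_ini in the first iteration (4a) and for the previous Th_fil afterwards (4b); the default weighting factor and limit values are invented examples:

```python
# One update of the detector parameter per partial area and frame,
# collecting formulas (1)-(6) from the description.

def update_th(n_dp, n_des, th_ini, th_prev, w=0.7, th_min=5.0, th_max=100.0):
    th_fac = abs(n_dp - n_des) / n_des                   # (1)
    if n_dp > n_des:
        th_raw = th_ini - th_ini * th_fac                # (2)
    else:
        th_raw = th_ini + th_ini * th_fac                # (3)
    th_fil = w * th_prev + (1 - w) * th_raw              # (4a)/(4b)
    return min(max(th_fil, th_min), th_max)              # (5), (6)

# One step: 80 pixels found in a tile where 50 were wanted.
print(update_th(80, 50, th_ini=20.0, th_prev=20.0))
```

The weighting factor w damps the update so that the per-tile pixel count does not oscillate around the target, while the clamping in the last line keeps the detector out of the noise regime.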
- In step S10, the previously determined parameter Th_fil is provided for the respective partial area 7, in order to use it for this partial area 7 in the next image 6 of the image sequence. Subsequently, the method continues again at step S2.
- The detection of the characteristic pixels 18 in step S3 is then performed for the respective partial area 7 of the next image 6 based on the new parameter Th_fil.
- These newly detected characteristic pixels 18 of the next image 6 are also - as already mentioned - provided for any further processing in step S4.
- Fig. 3 shows an application of the camera system 2 according to the invention, in which the camera 3 is formed as a rear-view camera.
- an image 6 of an image sequence, which is output by the rear-view camera, is illustrated.
- a region of interest 10 is selected.
- This region of interest 10 extends to an area of the image 6, which exclusively contains information about the environmental region 5 of the motor vehicle 1.
- pixels also occur in the image 6, which do not include information about the environmental region 5 of the motor vehicle 1 .
- filling pixels can be present in the image 6.
- the filling pixels can for example have been added by a transformation of the image 6.
- the image 6 shows areas of the motor vehicle 1, such as for example a number plate.
- the partial areas 7 are determined or selected by means of a grid 11.
- the partial areas 7 are of equal size, i.e. they cover an equally sized area in the image 6.
- a three-dimensional grid 12 in a world coordinate system of the environmental region 5 determines the selection of the partial areas.
- the three-dimensional grid 12 ensures a uniform coverage of the environmental region 5.
- Fig. 5 shows the grid 11, which has arisen by a transformation of the three-dimensional grid 12 into the image coordinate system of the image 6.
- In Fig. 6, it is provided that not all of the partial areas 7 within the region of interest 10 are used, and thus the characteristic pixels 18 are not determined in all of the partial areas 7 of the region of interest 10.
- the partial areas 7 which are not in the field of view of the camera 3 are thus not taken into account. This is done so that the respective partial areas 7 cover exactly that area of the image 6 which exclusively contains information about the environmental region 5, while at the same time a shape of the partial areas 7 that is advantageous for the calculation of the characteristic pixels 18 can be selected.
- An advantageous shape for example exists if the partial areas 7 can be calculated by means of two rows and two columns of coordinates of the image 6.
- deactivated partial areas 13 are shown, which differ from the partial areas 7 in that none of the characteristic pixels 18 are determined in the deactivated partial areas 13.
- Fig. 7 shows the image processing device 4 including a digital signal processing device 14 with an internal memory 15 and an external memory 16.
- the external memory 16 is connected to the digital signal processing device 14 with a bus 17 and thus data can be transmitted from the external memory 16 to the internal memory 15 and vice versa.
- the respective images 6 of the image sequence are stored in the external memory 16 after capture by the camera 3, and each partial area 7 is transmitted completely to the internal memory 15 via the bus 17.
- the digital signal processing device 14 can now perform the determination of the characteristic pixels 18 based on the partial area 7 stored in the internal memory 15. Subsequently, the transmission of the information about the characteristic pixels 18 back into the external memory 16 is effected. Now, the internal memory 15 is available for the next partial area 7.
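The block-based transfer just described can be sketched as follows; the lists stand in for the external and internal memories, and `detect` stands in for the detector running on the on-chip copy (all names are illustrative):

```python
# Sketch of block-based memory transfer: the full frame stays in the
# "external" memory, each partial area is copied into a small "internal"
# buffer, processed there, and only the results are written back.

def process_frame(frame, tiles, detect):
    """frame: 2-D list (external memory stand-in); tiles: (x, y, w, h)
    rectangles; detect: function run on one copied-in tile."""
    results = []
    for x, y, w, h in tiles:
        internal = [row[x:x + w] for row in frame[y:y + h]]  # copy tile on-chip
        results.append(detect(internal))                     # work on the copy
    return results                                           # results go back out

frame = [[r * 10 + c for c in range(8)] for r in range(4)]
tiles = [(0, 0, 4, 2), (4, 0, 4, 2), (0, 2, 4, 2), (4, 2, 4, 2)]
print(process_frame(frame, tiles, lambda t: max(max(row) for row in t)))  # [13, 17, 33, 37]
```

Only one tile-sized buffer is live at a time, which is what lets the fast internal memory substitute for random access to the whole frame over the bus.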
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Traffic Control Systems (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a method for determining characteristic pixels (18) for a camera system (2) of a motor vehicle (1). An image sequence of an environmental region (5) of the motor vehicle (1), comprising a temporal sequence of images (6), is captured by means of a camera (3) of the camera system (2). Furthermore, a predetermined number of partial areas (7) of one of the images of the image sequence is selected by means of an image processing device (4) of the camera system (2) (S1), wherein the following steps are performed for at least one of the partial areas (7): a target value (N_des) for a number (N_dp) of the characteristic pixels (18) is preset. Furthermore, the characteristic pixels (18) of a first image (6) of the image sequence are determined by means of a detector of the camera system (2) depending on a parameter (Th_fil) of the detector, which describes a sensitivity of the detector in calculating the characteristic pixels (18) (S3). Furthermore, the number (N_dp) of the characteristic pixels (18) is determined (S5). The number (N_dp) of characteristic pixels (18) is compared to the target value (N_des) (S6), and the parameter (Th_fil) of the detector is adapted depending on the comparison (S7a, S7b, S8, S9). Finally, the characteristic pixels (18) of a second image of the image sequence are determined depending on the respectively adapted parameters (Th_fil).
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE102014110527.3 | 2014-07-25 | ||
| DE102014110527.3A DE102014110527A1 (de) | 2014-07-25 | 2014-07-25 | Method for determining characteristic image points for a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2016012289A1 true WO2016012289A1 (fr) | 2016-01-28 |
Family
ID=53761330
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2015/065928 Ceased WO2016012289A1 (fr) | 2014-07-25 | 2015-07-13 | Method for determining characteristic pixels for a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle |
Country Status (2)
| Country | Link |
|---|---|
| DE (1) | DE102014110527A1 (fr) |
| WO (1) | WO2016012289A1 (fr) |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8553081B2 (en) * | 2006-08-31 | 2013-10-08 | Alpine Electronics, Inc. | Apparatus and method for displaying an image of vehicle surroundings |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4233723B2 (ja) * | 2000-02-28 | 2009-03-04 | Honda Motor Co., Ltd. | Obstacle detection device, obstacle detection method, and recording medium recording an obstacle detection program |
| DE10066189B4 (de) * | 2000-05-18 | 2006-09-07 | Optigraf Ag Vaduz | Method for recognizing objects |
| US7231288B2 (en) * | 2005-03-15 | 2007-06-12 | Visteon Global Technologies, Inc. | System to determine distance to a lead vehicle |
| JP4988408B2 (ja) * | 2007-04-09 | 2012-08-01 | Denso Corporation | Image recognition device |
| DE102012002321B4 (de) * | 2012-02-06 | 2022-04-28 | Airbus Defence and Space GmbH | Method for recognizing a predetermined pattern in an image data set |
- 2014-07-25: DE application DE102014110527.3A filed (patent DE102014110527A1, status: active, Pending)
- 2015-07-13: WO application PCT/EP2015/065928 filed (publication WO2016012289A1, status: not active, Ceased)
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8553081B2 (en) * | 2006-08-31 | 2013-10-08 | Alpine Electronics, Inc. | Apparatus and method for displaying an image of vehicle surroundings |
Non-Patent Citations (7)
| Title |
|---|
| ALBERT S HUANG ET AL: "Visual Odometry and Mapping for Autonomous Flight Using an RGB-D Camera", INT. SYMPOSIUM ON ROBOTICS RESEARCH (ISRR), 28 August 2011 (2011-08-28), XP055133937, Retrieved from the Internet <URL:http://www.cs.washington.edu/robotics/projects/postscripts/Huang-ISRR-2011.pdf> [retrieved on 20140808] * |
| FLORE FAILLE: "Adapting Interest Point Detection to Illumination Conditions", 10 December 2003 (2003-12-10), XP055221996, Retrieved from the Internet <URL:http://www-prima.inrialpes.fr/perso/Tran/Draft/InterestPoint/Adapting-Corner-Detector-illumination.pdf> [retrieved on 20151019] * |
| FLORENTZ GASPARD ET AL: "SuperFAST: Model-based adaptive corner detection for scalable robotic vision", 2014 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, IEEE, 3 July 2014 (2014-07-03), pages 1003 - 1010, XP032676732, DOI: 10.1109/IROS.2014.6942681 * |
| GASPARD FLORENTZ: "SuperFAST: Model-Based Adaptive Corner Detection for Scalable Robotic Vision", 3 July 2014 (2014-07-03), XP055220092, Retrieved from the Internet <URL:http://u2is.ensta-paristech.fr/seminaire.php?lang=en> [retrieved on 20151012] * |
| RAINER VOIGT ET AL: "Robust embedded egomotion estimation", INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2011 IEEE/RSJ INTERNATIONAL CONFERENCE ON, IEEE, 25 September 2011 (2011-09-25), pages 2694 - 2699, XP032201319, ISBN: 978-1-61284-454-1, DOI: 10.1109/IROS.2011.6095122 * |
| RUSSELL D ET AL: "A highly efficient block-based dynamic background model", PROCEEDINGS. IEEE CONFERENCE ON ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE, 2005. COMO, ITALY SEPT. 15-16, 2005, PISCATAWAY, NJ, USA,IEEE, PISCATAWAY, NJ, USA, 15 September 2005 (2005-09-15), pages 417 - 422, XP010881212, ISBN: 978-0-7803-9385-1, DOI: 10.1109/AVSS.2005.1577305 * |
| V. NANNEN; G. OLIVER: "Grid-Based Spatial Keypoint Selection for Real Time Visual Odometry", 2ND INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION APPLICATIONS AND METHODS, BARCELONA, 2013 |
Also Published As
| Publication number | Publication date |
|---|---|
| DE102014110527A1 (de) | 2016-01-28 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP5551595B2 (ja) | Runway monitoring system and method | |
| US9789820B2 (en) | Object detection apparatus | |
| US9336574B2 (en) | Image super-resolution for dynamic rearview mirror | |
| JP5267596B2 (ja) | Moving object detection device | |
| KR101697512B1 (ko) | Image registration apparatus and method | |
| EP3624578B1 (fr) | System and method for automatic coupling between a tractor and an implement | |
| KR101928391B1 (ko) | Method and apparatus for fusing a multispectral image and a radar image | |
| US11263758B2 (en) | Image processing method and apparatus | |
| JP2012118698A (ja) | Image processing device | |
| KR101051459B1 (ko) | Apparatus and method for extracting edges of an image | |
| EP3163506A1 (fr) | Method for generating a stereo map with novel optical resolutions | |
| EP2610778A1 (fr) | Method for detecting an obstacle and driver assistance system | |
| JP6188592B2 (ja) | Object detection device, object detection method, and object detection program | |
| US10687044B2 (en) | Method and arrangement for calibration of cameras | |
| CN106780550A (zh) | Target tracking method and electronic device | |
| KR20150101806A (ko) | Around view monitoring system and method using automatic recognition of a grid pattern | |
| KR101705558B1 (ko) | Apparatus and method for tolerance correction of an AVM system | |
| US9615050B2 (en) | Topology preserving intensity binning on reduced resolution grid of adaptive weighted cells | |
| CN107950023 (zh) | Vehicle display device and vehicle display method | |
| JP2018503195A (ja) | Object detection method and object detection device | |
| US9928430B2 (en) | Dynamic stixel estimation using a single moving camera | |
| JPWO2018146997A1 (ja) | Three-dimensional object detection device | |
| JP4826355B2 (ja) | Vehicle surroundings display device | |
| WO2016012289A1 (fr) | Method for determining characteristic pixels for a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle | |
| US10242460B2 (en) | Imaging apparatus, car, and variation detection method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15744137; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 15744137; Country of ref document: EP; Kind code of ref document: A1 |