
WO2016012289A1 - Method for determining characteristic pixels for a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle - Google Patents


Info

Publication number
WO2016012289A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
characteristic pixels
camera
partial areas
motor vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/EP2015/065928
Other languages
French (fr)
Inventor
Sunil Chandra
Etienne PEROT
Petros Kapsalas
Ciáran HUGHES
Jonathan Horgan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Connaught Electronics Ltd
Original Assignee
Connaught Electronics Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Connaught Electronics Ltd filed Critical Connaught Electronics Ltd
Publication of WO2016012289A1 publication Critical patent/WO2016012289A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]

Definitions

  • the invention relates to a method for determining characteristic pixels for a camera system of a motor vehicle.
  • an image sequence of an environmental region of the motor vehicle which includes a temporal sequence of images, is captured by means of a camera of the camera system.
  • a predetermined number of partial areas of one of the images of the image sequence is selected by means of an image processing device of the camera system.
  • the invention relates to a camera system for a motor vehicle, which is formed for performing such a method, to a driver assistance system with such a camera system as well as to a motor vehicle with such a driver assistance system.
  • characteristic pixels are determined by means of a detector of the camera system, which is implemented in an image processing device of the camera system.
  • the detector is adapted to detect characteristic or prominent pixels in the image and extract them from the image, respectively.
  • the detector is also known under the designation interest operator and aims at the extraction of certain high-frequency pixels or high-saliency pixels of the image.
  • Known detectors are for example the Harris operator, the SIFT operator or the FAST operator. The enumerated operators all have the same target, namely the extraction of corners or salient features in the image.
  • a corner can for example be the point of intersection of two edges in the image, which preferably has a great contrast. These edges can for example be visualized by deriving the image in the form of a gradient image.
  • detectors such as the FAST operator which do not use gradient images.
  • the characteristic pixels are detected in the entire image. This may lead to the result that a distribution of the characteristic pixels in the image occurs very unevenly. In areas of the image with many corners and high contrast, many characteristic pixels are detected, while in areas of the image with few corners and low contrast, few characteristic pixels are detected.
  • a grid can be used, which divides the image into the partial areas.
  • the most intense characteristic pixels are selected from each of the partial areas. For example, this is known from the conference contribution "Grid-Based Spatial Keypoint Selection for Real Time Visual Odometry" of V. Nannen and G. Oliver, 2nd International Conference on Pattern Recognition Applications and Methods, Barcelona, 2013. There, a method is shown, in which the characteristic pixels are detected depending on a grid. This grid divides the image into partial areas. The intensity of the characteristic pixels can be determined by a confidence value, which is returned by the detector for each characteristic pixel.
  • the selection of the characteristic pixels is now effected such that a substantially identical number of the characteristic pixels is present for each partial area.
  • the disadvantage remains that these characteristic pixels, which are later available for the selection process, first of all have to be calculated. This means that computing time has to be expended for characteristic pixels, which do not find use in the further method anymore.
  • this object is solved by a method, by a camera system, by a driver assistance system as well as by a motor vehicle having the features according to the respective independent claims.
  • Advantageous implementations of the invention are the subject matter of the dependent claims, of the description and of the figures.
  • a method for determining characteristic pixels for a camera system of a motor vehicle includes capturing an image sequence of an environmental region of the motor vehicle, which includes a temporal sequence of images, by means of a camera of the camera system, selecting a predetermined number of partial areas of one of the images of the image sequence by means of an image processing device of the camera system, wherein the following steps are performed for at least one of the partial areas: presetting a target value for a number of the characteristic pixels, determining the characteristic pixels of a first image of the image sequence by means of a detector of the camera system depending on a parameter of the detector, which describes a sensitivity of the detector in calculating the characteristic pixels, determining the number of the characteristic pixels, comparing the number of the characteristic pixels to the target value, adapting the parameter of the detector depending on the comparison and determining the characteristic pixels of a second image of the image sequence depending on the respectively adapted parameters.
  • the method serves for determining characteristic pixels or characteristic features in respective images of an image sequence.
  • the image sequence is provided by a camera of the camera system.
  • the camera is a video camera, which is able to provide a plurality of images (frames) per second.
  • the camera can be a CCD camera or a CMOS camera or any other suitable imaging device.
  • the camera can also be a thermal sensor such as a microbolometer which provides images in the infrared spectrum.
  • a first image is divided into a plurality of partial areas. For at least one of the partial areas, a target value for a number of the characteristic pixels is predetermined. Therein, a target value can also be preset for each partial area of the first image.
  • the determination is dependent on a parameter of the detector, which describes a sensitivity of the detector in calculating the characteristic pixels.
  • the sensitivity of the detector in particular describes how pronounced a characteristic pixel has to be in order to be detected as such. In this case, a strong markedness means in particular a great contrast difference at a corner or edge of the image.
  • the parameter used depends on the algorithm of the detector; several parameters can also be used to adjust the sensitivity of the detector.
  • after the characteristic pixels are determined, the number thereof is determined.
  • the number of the characteristic pixels can be determined by counting these characteristic features in each partial area.
  • after determining the number of the characteristic pixels, this number is compared to the target value.
  • the parameter of the detector is adapted depending on the comparison.
  • the characteristic pixels are calculated for the same partial area of a second image of the image sequence depending on the adapted parameter.
  • the number of the characteristic pixels in the partial area of the second image corresponds to the target value or is closer to the target value than in the first image.
  • by the method according to the invention, it becomes possible to ensure a uniform distribution of the characteristic pixels across the entire image due to the partial areas. Due to the parameter of the detector adapted for each of the images of the image sequence, the calculation of the characteristic pixels can be restricted to the number that is actually intended or desired. In other words, the method allows a spatially and temporally adapted determination or calculation of the characteristic pixels. In an embodiment, it is provided that the same target value for the number of the characteristic pixels is preset for each of the partial areas. A uniform distribution of the characteristic pixels across the image can thereby be ensured.
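The temporally adapted determination described above can be sketched as a simple closed loop: a per-tile sensitivity parameter is nudged from frame to frame until the detected count approaches the target. The detector below is only a simulated stand-in (its count grows with the sensitivity), and the 0.5 step factor is an illustrative choice, not a value from the patent.

```python
def simulated_detector(sensitivity):
    # Stand-in for a real corner detector: more sensitivity -> more pixels.
    return int(4 * sensitivity)

def run_sequence(n_frames, target, sensitivity=10.0):
    """Adapt the sensitivity over an image sequence so the per-tile
    count of characteristic pixels converges toward the target value."""
    counts = []
    for _ in range(n_frames):
        n = simulated_detector(sensitivity)
        counts.append(n)
        # Nudge the sensitivity toward the target count (proportional step).
        if n > target:
            sensitivity *= 1 - 0.5 * min(1.0, (n - target) / target)
        elif n < target:
            sensitivity *= 1 + 0.5 * min(1.0, (target - n) / target)
    return counts

counts = run_sequence(8, target=100)
```

After a few frames the count settles near the target, which is exactly the behaviour the per-tile adaptation aims for.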
  • the parameter of the detector is adapted such that the sensitivity of the detector is greater than a predetermined minimum value.
  • a predetermined minimum value is provided.
  • one or more parameters of the detector controlling its sensitivity can be formed such that the result of detection, namely the extraction of the characteristic pixels, provides a reasonable result. If the detector is adjusted to be very sensitive by its parameters, only image noise is detected. Image noise is understood as interference that has no relation to the actual scene content.
  • the advantage of the predetermined minimum value is therefore that the image noise does not contribute to the number of the characteristic pixels and the characteristic pixels can be reliably determined.
  • a region of interest is preset in the respective image and the partial areas are selected in the region of interest of the respective image.
  • the region of interest can for example be an area of the image, which has high information content.
  • parts of the motor vehicle such as for example a number plate are depicted in the image due to a characteristic of a lens of the camera, in particular a fish eye lens. These parts of the motor vehicle also do not contain important information and can be excluded from the region of interest.
  • it is advantageous in the selection of the region of interest that the characteristic pixels are determined exclusively in an area with high information content. This reduces the required computational effort for determining the characteristic pixels and at the same time reduces the error rate in determining the characteristic pixels.
  • the partial areas are selected by dividing the respective image by means of a grid.
  • a grid is put on the respective image and thus the individual partial areas are determined. This has the advantage that the position of the partial areas can be very precisely described. Furthermore, the partial areas can also be described by complicated geometric shapes of the grid.
  • the respective region of interest is divided into identically sized partial areas by the grid.
  • the region of interest can be divided such that all of the partial areas have the same dimensions.
  • the identically sized partial areas have the advantage that the computing time for calculating or determining the characteristic pixels is better predictable and can range substantially in the same order of magnitude for all of the partial areas.
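The division of a region of interest into identically sized partial areas can be sketched as follows; the grid dimensions and the helper name are illustrative, not taken from the patent.

```python
def grid_tiles(width, height, cols, rows):
    """Split a width x height region of interest into cols x rows
    equally sized partial areas; returns (x0, y0, x1, y1) boxes."""
    tw, th = width // cols, height // rows
    return [(c * tw, r * th, (c + 1) * tw, (r + 1) * th)
            for r in range(rows) for c in range(cols)]

# An 8 x 6 grid over a 640 x 480 region of interest gives 48 tiles of 80 x 80.
tiles = grid_tiles(640, 480, cols=8, rows=6)
```

Because every tile has the same pixel count, the per-tile computing time is predictable, which is the advantage stated above.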
  • the partial areas are selected depending on a field of view of the camera, which contains exclusively information about the environmental region.
  • this can mean that the intended region of interest contains more than only the environmental region of the motor vehicle.
  • the image has been transformed and the edge area of the image has been filled with the filling pixels.
  • the filling is performed so that the image has a rectangular shape.
  • the rectangular shape in turn facilitates further processing of the image. If exclusively partial areas within the field of view of the camera are now used, the computational effort in calculating the characteristic pixels can be reduced.
  • an image size of each of the partial areas is set depending on an angle of the respective partial area to an optical axis of the camera by means of the image processing device in selecting the partial areas.
  • This approach is advantageous because the image has certain biases or distortions depending on a lens of the camera. These distortions are usually the greater, the farther one of the pixels is from the optical axis of the camera, or the closer one of the pixels is to the edge of the image. They can be particularly pronounced if the camera includes a special lens, in particular a fish eye lens.
  • the image is transformed in order to remove the distortions. This transformation can result in a different number of pixels being present in the respective partial area. This number also depends on the distance of the partial area from the optical axis of the camera.
  • a three- dimensional grid is transformed from a world coordinate system into a two-dimensional image coordinate system of the image.
  • the three-dimensional grid is imaginarily spanned in the space or in the environmental region of the motor vehicle and each partial area for example covers an identically sized area in the real world or in the world coordinate system.
  • geometric characteristics of the camera are taken into account and the partial areas in the two-dimensional image coordinate system or in the image each include the same number of pixels, which are also already encompassed by the respectively corresponding partial areas in the world coordinate system.
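The patent does not spell out a camera model, so the transformation of a three-dimensional grid into image coordinates is illustrated below with the simplest possible assumption: a pinhole projection with made-up intrinsics, ignoring the world-to-camera extrinsics and any fish-eye distortion a real system would have to model.

```python
def project(point, f=400.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-frame point (X, Y, Z), Z pointing
    forward. The intrinsics f, cx, cy are illustrative values only."""
    X, Y, Z = point
    return (f * X / Z + cx, f * Y / Z + cy)

# A ground-plane grid ahead of the camera (Y down, camera 1.5 m high):
grid_3d = [(x, 1.5, z) for z in (4.0, 8.0) for x in (-2.0, 0.0, 2.0)]
grid_2d = [project(p) for p in grid_3d]
```

Note how grid cells of equal size in the world project to differently sized cells in the image: the nearer row (Z = 4 m) spreads much wider across the image than the farther row (Z = 8 m).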
  • the determination of the characteristic pixels of the partial area by the image processing device is performed in an internal memory of a digital signal processing device of the image processing device.
  • the digital signal processing device includes an internal memory (on-chip memory). It is connected to an external memory of the image processing device by means of a bus.
  • the internal memory usually has a low storage volume compared to the external memory.
  • the access times by the digital signal processing device to the internal memory are shorter than to the external memory. This is because the bus connecting the external memory acts as a limiting factor on the data throughput.
  • the advantage of the partial areas is now that each entire partial area can be shifted into the internal memory to determine the characteristic pixels.
  • the image typically has a data size that does not allow shifting the entire image into the internal memory.
  • the entire image can therefore be stored in the external memory, while the partial areas of the image are shifted into the internal memory for processing, i.e. for determining the characteristic pixels.
  • This approach is also referred to as a block-based memory transfer.
  • the grid and thereby the size of the partial areas can be set such that the memory size of the internal memory is exactly sufficient for each one of the partial areas.
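The block-based memory transfer can be sketched in plain Python, with a nested list standing in for the image in external memory and a per-tile copy standing in for the transfer into the small internal buffer; the function and names are illustrative, not from the patent.

```python
def process_tiled(image, tile, process):
    """Copy each tile of the full image ("external memory") into a small
    working buffer ("internal memory"), run the detector stand-in on the
    buffer only, and collect the per-tile results."""
    h, w = len(image), len(image[0])
    th, tw = tile
    results = {}
    for y0 in range(0, h, th):
        for x0 in range(0, w, tw):
            # Block transfer: copy only this tile into the working buffer.
            buf = [row[x0:x0 + tw] for row in image[y0:y0 + th]]
            results[(y0, x0)] = process(buf)
    return results

img = [[(x + y) % 7 for x in range(8)] for y in range(8)]
# A trivial "detector" (sum of the buffer) stands in for feature extraction.
out = process_tiled(img, tile=(4, 4), process=lambda b: sum(map(sum, b)))
```

Because each call to `process` only touches the small buffer, the same pattern works when the full image does not fit into the fast on-chip memory.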
  • the characteristic pixels are calculated with a corner detection method, in particular with a FAST algorithm.
  • the corner detection methods usually provide particularly prominent characteristic pixels.
  • the FAST algorithm (Features from Accelerated Segment Test) is known for the fact that the characteristic pixels can be determined particularly fast. This has the advantage that the characteristic pixels can be determined in real time in image sequences with a high frame rate.
  • the FAST algorithm does not lead to any discontinuities at the borders of the partial area.
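A minimal sketch of the segment test underlying FAST, assuming a plain 2D brightness array: a pixel is a corner if at least n contiguous pixels on the 16-pixel Bresenham circle of radius 3 are all brighter than the center plus a threshold, or all darker than the center minus it. This is a didactic simplification of FAST-9, not the patent's implementation.

```python
# Offsets (dx, dy) of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def fast_corner(img, x, y, t, n=9):
    """Simplified FAST-9 segment test on a 2D list of brightness values."""
    c = img[y][x]
    # Label each circle pixel: +1 brighter, -1 darker, 0 similar to center.
    labels = [1 if img[y + dy][x + dx] > c + t
              else (-1 if img[y + dy][x + dx] < c - t else 0)
              for dx, dy in CIRCLE]
    labels += labels  # unwrap the circular sequence to catch wrapping runs
    run = best = 0
    prev = 0
    for lab in labels:
        run = run + 1 if (lab != 0 and lab == prev) else (1 if lab != 0 else 0)
        prev = lab
        best = max(best, run)
    return best >= n

# Corner of a bright square in a dark 16 x 16 image:
img = [[100 if (x >= 8 and y >= 8) else 0 for x in range(16)] for y in range(16)]
corner = fast_corner(img, 8, 8, t=20)
```

Since the test only compares each circle pixel against the center, no gradient image has to be computed, which is why the text above notes that FAST does not rely on gradient images.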
  • the provided method is not restricted to corner detection methods or the FAST algorithm.
  • An edge detection method, for example a Canny edge detector, can also be used.
  • a Harris detector or a Förstner detector can also be used to determine the characteristic pixels.
  • the present method can be performed with any detector for determining the characteristic pixels.
  • a camera system according to the invention for a motor vehicle includes at least one camera for providing a sequence of images of an environmental region of the motor vehicle and an image processing device adapted to perform the method according to the invention.
  • a driver assistance system includes a camera system according to the invention.
  • a motor vehicle according to the invention includes a driver assistance system according to the invention.
  • the driver assistance system is an electronic auxiliary device for assisting a driver in certain driving situations. Furthermore, the driver assistance system warns the driver during or shortly before critical traffic situations by a suitable human-machine interface.
  • FIG. 1 in schematic plan view a motor vehicle with a camera system including a camera and an image processing device;
  • Fig. 2 a flow diagram of a method according to an embodiment of the invention
  • Fig. 3 in schematic illustration an image of an environmental region of the motor vehicle provided by the camera, wherein partial areas are selected by means of a grid;
  • Fig. 4 in schematic illustration the motor vehicle in side view and a three- dimensional grid in a world coordinate system of the environmental region;
  • Fig. 5 in schematic illustration the image according to Fig. 3, wherein the grid has arisen by a transformation of the three-dimensional grid;
  • Fig. 6 in schematic illustration the image according to Fig. 3, wherein the partial areas are selected depending on a field of view of the camera;
  • Fig. 7 in schematic illustration a part of an image processing device, which includes a digital signal processing device with an internal memory and an external memory.
  • In Fig. 1, a plan view of a motor vehicle 1 with a camera system 2 is schematically illustrated.
  • the camera system 2 includes a camera 3 and an image processing device 4, which can for example be integrated in the camera 3.
  • this image processing device 4 can also be a component separate from the camera 3, which can be disposed in any position in the motor vehicle 1.
  • the camera 3 is disposed at the rear of the motor vehicle 1 and captures an environmental region 5 behind the motor vehicle 1.
  • an application with a front camera or a lateral camera or a camera at any other location on the motor vehicle 1 is also possible.
  • the camera 3 has a horizontal capturing angle α, which can for example have an opening range between 120° and 200°, and a vertical capturing angle (not illustrated), which can for example extend from the surface of a road directly behind the motor vehicle 1 up to the horizon and beyond. These characteristics are for example allowed by a fish eye lens of the camera 3.
  • the camera 3 can be a CMOS camera or else a CCD camera or any image capturing device, by which characteristic pixels 18 in the environmental region 5 can be detected.
  • the camera 3 is a video camera, which continuously captures an image sequence or a sequence of images 6.
  • the image processing device 4 then processes the image sequence in real time and can determine the characteristic pixels 18 for each image 6 of the image sequence based on this image sequence.
  • the camera system 2 is for example a part of a driver assistance system or of an object recognition system, which monitors the environmental region 5 based on the detected characteristic pixels 18 and can warn a driver of the motor vehicle 1 of a collision with the output of a corresponding warning signal.
  • the camera system 2 can also be a part of a system, by which a posture or a position of the motor vehicle 1 can be determined. The position determination can also be effected based on the detected characteristic pixels 18 over the image sequence. The principle of odometry underlies this approach.
  • the position of the motor vehicle 1 which was originally provided by a global satellite navigation system, can be improved and/or more accurately determined with the aid of the camera system 2 if this global satellite navigation system is no longer available or only available in limited manner. For example, this can be the case in areas, which make reception of satellite signals impossible. A typical situation for this is passing through a tunnel.
  • Fig. 2 shows a flow diagram of the method according to the invention.
  • a predetermined number of partial areas 7 are selected from a first image 6.
  • the first image 6 is a part of an image sequence with temporally consecutive images 6.
  • An initialization is performed for each of the partial areas 7.
  • the initialization includes determining a target value N_des, which specifies a desired number of the characteristic pixels 18 for the respective partial area 7.
  • a loop for determining the characteristic pixels 18 is started for each partial area 7.
  • the characteristic pixels 18 in the respective partial area 7 are determined by means of a detector.
  • the detector is adapted to extract prominent pixels such as corners or edges in the partial area 7. The determination occurs depending on a parameter Th_fil, which describes a sensitivity of the detector in calculating the characteristic pixels 18.
  • the characteristic pixels 18 acquired with the detector are output in the form of a list.
  • the list can for example be stored in a memory of the image processing device 4. This list can be provided for further processing, in particular for object recognition or for odometry, for each partial area 7 of the first image 6.
  • a step S5 follows step S3, in which the number N_dp of the characteristic pixels 18 is determined.
  • the characteristic pixels 18 determined with the detector are counted.
  • the list with the characteristic pixels can be evaluated by means of the image processing device 4.
  • the parameter Th_fil is adapted.
  • for this, a limit value factor Th_fac is determined, based on which the parameter Th_fil can be determined.
  • the limit value factor Th_fac is determined depending on the number N_dp of the detected characteristic pixels 18 and the target value N_des. This can be mathematically described as follows:
  • Th_fac = |N_dp − N_des| / N_des (1)
  • in a step S6, the determined number N_dp of the characteristic pixels 18 is compared to the target value N_des.
  • the adaptation of the parameter Th_fil is effected either in a step S7a or in a step S7b, according to whether the number N_dp is above or below the target value N_des.
  • the result of this comparison is used in calculating an intermediate value Th_raw for the parameter Th_fil. This can be determined according to the following formulas:
  • Th_raw = Th_ini − Th_ini · Th_fac, if N_dp > N_des. (2) In the other case, it applies:
  • Th_raw = Th_ini + Th_ini · Th_fac, if N_dp < N_des. (3)
  • Th_ini is an initialization value for the parameter Th_fil, which is first used as a parameter for determining the characteristic pixels 18.
  • the initialization value Th_ini is in particular used in determining the characteristic pixels 18 of the first image 6 of the image sequence, that is, in the first iteration of the algorithm.
  • the initialization value Th_ini can result from a basic setting of the detector.
  • the intermediate value Th_raw is decremented in step S7b.
  • the intermediate value Th_raw is incremented in step S7a.
  • whether the intermediate value Th_raw is incremented or decremented depends on whether the number N_dp is above the target value N_des, so that the parameter is adapted such that presumably fewer characteristic pixels 18 are detected in the next image 6 of the image sequence, or below the target value N_des, so that the parameter is adapted such that presumably more characteristic pixels are detected in the next image 6.
  • Th_raw is an intermediate value for the parameter Th_fil of the detector to be applied to the next image 6 of the image sequence.
  • the parameter Th_fil for the next image 6 of the image sequence is calculated.
  • the parameter Th_fil is determined depending on Th_ini and Th_raw.
  • the calculation of Th_fil can be represented as follows:
  • Th_fil = (W_fil · Th_ini) + (1 − W_fil) · Th_raw. (4a)
  • Equation 4a can be used in particular for the first iteration step.
  • in subsequent iterations, the calculation of Th_fil can be represented as follows:
  • Th_fil = (W_fil · Th_fil,prev) + (1 − W_fil) · Th_raw, where Th_fil,prev is the parameter of the previous iteration. (4b)
  • W_fil corresponds to a weighting factor, which performs a weighting in the range from 0 to 1.
  • this weighting factor W_fil is used to reduce oscillations around the target value N_des.
  • the method is not restricted to this described weighting factor W_fil; all other weighting factors, which are identical in effect, can also be applied.
  • the parameter Th_fil is determined such that the sensitivity of the detector is greater than or equal to a predetermined minimum value Th_min.
  • the minimum value Th_min is required since the detector otherwise returns characteristic pixels 18 that correspond to image noise or accidentally detected pixels. If the sensitivity of the detector is adjusted unsuitably, pixels which actually do not have this property are also detected as characteristic pixels 18. Such a characteristic pixel 18 can then no longer be uniquely associated in a subsequent image 6 of the image sequence. This can be described with the following formula:
  • Th_fil = Th_min, if Th_fil < Th_min. (5)
  • analogously, a maximum value Th_max can be determined with the same intent. If the detector outputs fewer characteristic pixels 18 when the parameter Th_fil is increased, the maximum value Th_max can be used. The maximum value Th_max is then applied as follows:
  • Th_fil = Th_max, if Th_fil > Th_max. (6)
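The update rule of formulas (1) to (6) can be transcribed directly as a small function. The numeric defaults below (weighting factor, bounds) are illustrative assumptions only, and `th_prev` stands for Th_ini in the first iteration and for the previous Th_fil afterwards.

```python
def update_parameter(n_dp, n_des, th_prev, th_ini,
                     w_fil=0.5, th_min=5.0, th_max=250.0):
    """One adaptation step following formulas (1)-(6) of the description."""
    th_fac = abs(n_dp - n_des) / n_des                 # (1) limit value factor
    if n_dp > n_des:
        th_raw = th_ini - th_ini * th_fac              # (2) too many pixels
    else:
        th_raw = th_ini + th_ini * th_fac              # (3) too few pixels
    th_fil = w_fil * th_prev + (1 - w_fil) * th_raw    # (4a)/(4b) weighting
    return min(max(th_fil, th_min), th_max)            # (5), (6) clamping
```

For example, with Th_ini = 50, a target of 100 and 150 detected pixels, one step yields Th_raw = 25 and Th_fil = 37.5. Note that the sign convention of (2) and (3) treats the parameter as a sensitivity (a larger value yields more detections); for a FAST-style threshold, where a larger threshold yields fewer corners, the signs would be swapped.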
  • in step S10, the previously determined parameter Th_fil is now provided for the respective partial area 7, in order to be used for this partial area 7 in the next image 6 of the image sequence. Subsequently, the method is continued again in step S2.
  • the detection of the characteristic pixels 18 in step S3 is then performed for the respective partial area 7 of the next image 6 based on the new parameter Th_fil.
  • these newly detected characteristic pixels 18 of the next image 6 are also - as already previously mentioned - provided for any further processing in step S4.
  • Fig. 3 shows an application of the camera system 2 according to the invention, in which the camera 3 is formed as a rear-view camera.
  • an image 6 of an image sequence, which is output by the rear-view camera, is illustrated.
  • a region of interest 10 is selected.
  • This region of interest 10 extends to an area of the image 6, which exclusively contains information about the environmental region 5 of the motor vehicle 1.
  • pixels also occur in the image 6, which do not include information about the environmental region 5 of the motor vehicle 1.
  • filling pixels can be present in the image 6.
  • the filling pixels can for example have been added by a transformation of the image 6.
  • the image 6 shows areas of the motor vehicle 1, such as for example a number plate.
  • the partial areas 7 are determined or selected by means of a grid 11.
  • the partial areas 7 are of equal size, i.e. they cover an equally sized area in the image 6.
  • a three-dimensional grid 12 in a world coordinate system of the environmental region 5 determines the selection of the partial areas.
  • the three-dimensional grid 12 ensures a uniform coverage of the environmental region 5.
  • Fig. 5 shows the grid 11, which has arisen by a transformation of the three-dimensional grid 12 into the image 6.
  • in Fig. 6, it is provided that not all of the respective partial areas 7 within the region of interest 10 are used, and thus the characteristic pixels 18 are not determined in all of the partial areas 7 of the region of interest 10.
  • the partial areas 7, which thus are not in a field of view of the camera 3, are not taken into account. This is performed because thereby the entire area of the image 6 can be covered by the respective partial areas 7, which exclusively contains information about the environmental region 5, and at the same time an advantageous shape of the partial areas 7 can be selected for the calculation of the characteristic pixels 18.
  • An advantageous shape for example exists if the partial areas 7 can be calculated by means of two rows and two columns of coordinates of the image 6.
  • deactivated partial areas 13 are shown, which differ from the partial areas 7 in that none of the characteristic pixels 18 are determined in the deactivated partial areas 13.
  • Fig. 7 shows the image processing device 4 including a digital signal processing device 14 with an internal memory 15 and an external memory 16.
  • the external memory 16 is connected to the digital signal processing device 14 by a bus 17, and thus data can be transmitted from the external memory 16 to the internal memory 15 and vice versa.
  • the respective images 6 of the image sequence are stored in the external memory 16 after capture by the camera 3 and the respective partial area 7 is each completely transmitted to the internal memory 15 via the bus 17.
  • the digital signal processing device 14 can now perform the determination of the characteristic pixels 18 based on the partial area 7 stored in the internal memory 15. Subsequently, the transmission of the information about the characteristic pixels 18 back into the external memory 16 is effected. Now, the internal memory 15 is available for the next partial area 7.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for determining characteristic pixels (18) for a camera system (2) of a motor vehicle (1). An image sequence of an environmental region (5) of the motor vehicle (1) is captured, which includes a temporal sequence of images (6), by means of a camera (3) of the camera system (2). Furthermore, a predetermined number of partial areas (7) of one of the images of the image sequence is selected by means of an image processing device (4) of the camera system (2) (S1), wherein the following steps are performed for at least one of the partial areas (7): A target value (Ndes) for a number (Ndp) of the characteristic pixels (18) is preset. Furthermore, the characteristic pixels (18) of a first image (6) of the image sequence are determined by means of a detector of the camera system (2) depending on a parameter (Thfil) of the detector, which describes a sensitivity of the detector in calculating the characteristic pixels (18) (S3). Furthermore, the number (Ndp) of the characteristic pixels (18) is determined (S5). The number (Ndp) of the characteristic pixels (18) is compared to the target value (Ndes) (S6) and the parameter (Thfil) of the detector is adapted depending on the comparison (S7a, S7b, S8, S9). Finally, the characteristic pixels (18) of a second image of the image sequence are determined depending on the respective adapted parameters (Thfil).

Description

Method for determining characteristic pixels for a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle
The invention relates to a method for determining characteristic pixels for a camera system of a motor vehicle. Herein, an image sequence of an environmental region of the motor vehicle, which includes a temporal sequence of images, is captured by means of a camera of the camera system. In addition, a predetermined number of partial areas of one of the images of the image sequence is selected by means of an image processing device of the camera system. Furthermore, the invention relates to a camera system for a motor vehicle, which is formed for performing such a method, to a driver assistance system with such a camera system as well as to a motor vehicle with such a driver assistance system.
Methods for determining characteristic image locations, such as characteristic pixels, for a camera system of a motor vehicle are known from the prior art. Herein, for example, in an image provided by a camera of the camera system, characteristic pixels are determined by a detector executed on an image processing device of the camera system. The detector is adapted to detect characteristic or prominent pixels in the image and to extract them from the image. The detector is also known under the designation interest operator and aims at the extraction of certain high-frequency or high-saliency pixels of the image. Known detectors are for example the Harris operator, the SIFT operator or the FAST operator. The enumerated operators all have the same goal, namely the extraction of corners or salient features in the image. A corner can for example be the point of intersection of two edges in the image, which preferably has a great contrast. These edges can for example be visualized by deriving the image in the form of a gradient image. However, there are also detectors, such as the FAST operator, which do not use gradient images.
In the known detectors, the characteristic pixels are detected in the entire image. This may result in a very uneven distribution of the characteristic pixels across the image: in areas of the image with many corners and high contrast, many characteristic pixels are detected, while in areas with few corners and low contrast, few characteristic pixels are detected.
In order to counteract this, a grid can be used, which divides the image into partial areas. The most intense characteristic pixels are selected from each of the partial areas. For example, this is known from the conference contribution "Grid-Based Spatial Keypoint Selection for Real Time Visual Odometry" of V. Nannen and G. Oliver, 2nd International Conference on Pattern Recognition Applications and Methods, Barcelona, 2013. There, a method is shown, in which the characteristic pixels are detected depending on a grid. This grid divides the image into partial areas. The intensity of the characteristic pixels can be determined by a confidence value, which is returned by the detector for each characteristic pixel. The selection of the characteristic pixels is then effected such that a substantially identical number of the characteristic pixels is present for each partial area. However, the disadvantage remains that these characteristic pixels, which are later available for the selection process, first have to be calculated. This means that computing time has to be expended for characteristic pixels which do not find use in the further method.
It is the object of the invention to demonstrate a solution as to how characteristic pixels in an image can be detected particularly effectively with a camera system of the initially mentioned kind.
According to the invention, this object is solved by a method, by a camera system, by a driver assistance system as well as by a motor vehicle having the features according to the respective independent claims. Advantageous implementations of the invention are the subject matter of the dependent claims, of the description and of the figures.
A method according to the invention for determining characteristic pixels for a camera system of a motor vehicle includes capturing an image sequence of an environmental region of the motor vehicle, which includes a temporal sequence of images, by means of a camera of the camera system, selecting a predetermined number of partial areas of one of the images of the image sequence by means of an image processing device of the camera system, wherein the following steps are performed for at least one of the partial areas: presetting a target value for a number of the characteristic pixels, determining the characteristic pixels of a first image of the image sequence by means of a detector of the camera system depending on a parameter of the detector, which describes a sensitivity of the detector in calculating the characteristic pixels, determining the number of the characteristic pixels, comparing the number of the characteristic pixels to the target value, adapting the parameter of the detector depending on the comparison and determining the characteristic pixels of a second image of the image sequence depending on the respectively adapted parameters. The method serves for determining characteristic pixels or characteristic features in respective images of an image sequence. The image sequence is provided by a camera of the camera system. Preferably, the camera is a video camera, which is able to provide a plurality of images (frames) per second. The camera can be a CCD camera or a CMOS camera or any other suitable imaging device. The camera can also be a thermal sensor such as a microbolometer which provides images in the infrared spectrum. Therein, a first image is divided into a plurality of partial areas. For at least one of the partial areas, a target value for a number of the characteristic pixels is predetermined. Therein, a target value can also be preset for each partial area of the first image. 
The determination is dependent on a parameter of the detector, which describes a sensitivity of the detector in calculating the characteristic pixels. The sensitivity of the detector in particular describes how severely a characteristic pixel has to be marked in order to be detected as such. In this case, a severe markedness means in particular a great difference in contrast of the image in connection with a corner or edge of the image. The used parameter depends on the algorithm of the detector; several parameters can also be used to adjust the sensitivity of the detector.
After extracting the characteristic pixels with the detector, the number thereof is determined. The number of the characteristic pixels can be determined by counting these characteristic features in each partial area. After determining the number of the characteristic pixels, this number is compared to the target value. The parameter of the detector is then adapted depending on the result of the comparison. Now, the characteristic pixels are calculated for the same partial area of a second image of the image sequence depending on the adapted parameter. Thus, it can for example be achieved that the number of the characteristic pixels in the partial area of the second image corresponds to the target value or is closer to the target value than in the first image.
By the method according to the invention, it becomes possible to ensure a uniform distribution of the characteristic pixels across the entire image due to the partial areas. Due to the parameter of the detector adapted for each of the images of the image sequence, the calculation of the characteristic pixels can be restricted to the number that is intended or desired. In other words, the method allows a spatially and temporally adapted determination or calculation of the characteristic pixels. In an embodiment, it is provided that the same target value for the number of the characteristic pixels is preset for each of the partial areas. This means that a uniform distribution of the characteristic pixels across the image can thereby be ensured. In this manner, in the extraction of characteristic pixels in the temporally consecutive images of the image sequence, substantially the same number of characteristic pixels results for each of the partial areas, no matter whether the partial areas contain high-frequency areas or low-frequency areas. Presently, high-frequency areas are in particular understood to mean that the image is heterogeneous, thus has many transitions from high intensity values to low intensity values. In low-frequency areas, or a homogeneous image, the image predominantly has low frequencies and is substantially composed of similar intensity values.
In particular, it is provided that the parameter of the detector is adapted such that the sensitivity of the detector is greater than a predetermined minimum value. This means that one or more parameters of the detector controlling its sensitivity can be formed such that the detection, namely the extraction of the characteristic pixels, provides a reasonable result. If the detector is adjusted very sensitively via its parameters, this results in only image noise being detected. Image noise is understood to mean interferences which do not have any relation to the actual scene content. The advantage of the predetermined minimum value is therefore that the image noise does not contribute to the number of the characteristic pixels and the characteristic pixels can be reliably determined.
In a further configuration, it is provided that a region of interest is preset in the respective image and the partial areas are selected in the region of interest of the respective image. The region of interest (ROI) can for example be an area of the image which has high information content. Furthermore, it is possible that, in addition to the environment, parts of the motor vehicle, such as a number plate, are depicted in the image due to a characteristic of a lens of the camera, in particular a fish eye lens. These parts of the motor vehicle do not contain important information and can be excluded from the region of interest. Thus, it is advantageous in the selection of the region of interest that the characteristic pixels are determined exclusively in an area with high information content. This reduces the required computational effort for determining the characteristic pixels and at the same time reduces the error rate in determining the characteristic pixels.
Preferably, the partial areas are selected by dividing the respective image by means of a grid. In other words, a grid is put on the respective image and thus the individual partial areas are determined. This has the advantage that the position of the partial areas can be very precisely described. Furthermore, the partial areas can also be described by complicated geometric shapes of the grid.
Furthermore, it is preferably provided that the respective region of interest is divided into identically sized partial areas by the grid. The region of interest can be divided such that all of the partial areas have the same dimensions. The identically sized partial areas have the advantage that the computing time for calculating or determining the characteristic pixels is better predictable and can range substantially in the same order of magnitude for all of the partial areas.
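As an illustration, dividing a region of interest into identically sized partial areas by a grid can be sketched as follows; this is a minimal sketch, and the function and variable names are illustrative, not taken from the patent:

```python
def grid_tiles(width, height, nx, ny):
    """Divide a width x height region of interest into an nx x ny grid of
    identically sized partial areas, returned as (x0, y0, x1, y1) tuples
    with exclusive upper bounds."""
    tile_w = width // nx
    tile_h = height // ny
    tiles = []
    for row in range(ny):
        for col in range(nx):
            # each tile covers the same pixel area in the image
            tiles.append((col * tile_w, row * tile_h,
                          (col + 1) * tile_w, (row + 1) * tile_h))
    return tiles

# e.g. an 8 x 5 grid over a 1280 x 800 region of interest
tiles = grid_tiles(1280, 800, 8, 5)
```

Because all partial areas have the same dimensions, the processing time per tile stays in the same order of magnitude, as noted above.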
Similarly, it is provided that the partial areas are selected depending on a field of view of the camera, which contains exclusively information about the environmental region. For example, this can mean that the intended region of interest contains more than only the environmental region of the motor vehicle. This is for example the case if the image has been transformed and the edge area of the image has been filled with filling pixels. The filling is performed so that the image has a rectangular shape. The rectangular shape in turn facilitates further processing of the image. If exclusively partial areas within the field of view of the camera are used, the computational effort in calculating the characteristic pixels can be reduced.
Furthermore, it is advantageous if, in selecting the partial areas, an image size of each of the partial areas is set by means of the image processing device depending on an angle of the respective partial area to an optical axis of the camera. This approach is advantageous because the image has certain biases or distortions depending on a lens of the camera. These distortions are usually the greater, the farther one of the pixels is from the optical axis of the camera or the closer it is to the edge of the image. These distortions can be particularly pronounced if the lens is a special lens, in particular a fish eye lens. Furthermore, the image is transformed in order to subtract out or remove the distortions. As a result of this transformation, a different number of pixels may be present in the respective partial area, also depending on the distance of the partial area to the optical axis of the camera.
In a further form of configuration, for determining each of the partial areas, a three-dimensional grid is transformed from a world coordinate system into a two-dimensional image coordinate system of the image. This means that the three-dimensional grid is imaginarily spanned in space, or in the environmental region of the motor vehicle, and each partial area for example covers an identically sized area in the real world or in the world coordinate system. In the transformation from the world coordinate system into the two-dimensional image coordinate system, geometric characteristics of the camera are taken into account, and each partial area in the two-dimensional image coordinate system, or in the image, includes exactly those pixels which are encompassed by the respectively corresponding partial area in the world coordinate system.
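The effect of such a transformation can be sketched with a simple pinhole projection, used here as a stand-in for the actual camera model (which for a fish eye lens would involve additional distortion terms); the intrinsic parameters below are invented for illustration:

```python
def project(point, fx=100.0, fy=100.0, cx=320.0, cy=240.0):
    # Pinhole projection: camera coordinates (X right, Y down, Z forward)
    # mapped to pixel coordinates (u, v); fx, fy, cx, cy are illustrative.
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

def project_cell(x0, x1, z0, z1, cam_height=1.0):
    # Project the four corners of a ground-plane grid cell lying
    # cam_height metres below the camera (Y = cam_height).
    corners = [(x0, cam_height, z0), (x1, cam_height, z0),
               (x0, cam_height, z1), (x1, cam_height, z1)]
    return [project(c) for c in corners]

def u_span(cell):
    # Horizontal extent of the projected cell in the image
    us = [u for u, v in cell]
    return max(us) - min(us)

near = project_cell(-1.0, 1.0, 4.0, 6.0)    # 2 m wide cell starting 4 m ahead
far = project_cell(-1.0, 1.0, 8.0, 10.0)    # same-sized cell farther away
```

Two identically sized world-coordinate cells then occupy differently sized image regions: the nearer cell spans a larger pixel area, which is exactly the behavior shown by the deformed grid in Fig. 5.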
In a configuration, it is provided that the determination of the characteristic pixels of the partial area by the image processing device is performed in an internal memory of a digital signal processing device of the image processing device. Usually, the digital signal processing device (DSP) includes an internal memory (on-chip memory). The DSP is connected to an external memory of the image processing device by means of a bus. The internal memory usually has a low storage volume compared to the external memory. However, the access times of the digital signal processing device to the internal memory are shorter than to the external memory, because the bus connecting the external memory acts as a limiting factor on the data throughput. The advantage of the partial areas is now that each entire partial area can be shifted into the internal memory to determine the characteristic pixels. This also works if the image has a data size which does not allow shifting the entire image into the internal memory. Thus, the entire image can be stored in the external memory, while the partial areas of the image are shifted into the internal memory for processing, or for determining the characteristic pixels. This approach is also referred to as a block-based memory transfer. On the condition that the memory size of the internal memory is known, the grid, and thereby the size of the partial areas, can be set such that the memory size of the internal memory is exactly sufficient for each one of the partial areas.
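Sizing the grid to the internal memory can be sketched with simple arithmetic; all figures below (memory size, tile width, pixel depth) are illustrative, not taken from the patent:

```python
def max_tile_rows(internal_mem_bytes, tile_width_px, bytes_per_px=1):
    """How many image rows of a tile of the given width fit into the DSP's
    on-chip memory, so that one whole partial area can be transferred at
    once (block-based memory transfer)."""
    return internal_mem_bytes // (tile_width_px * bytes_per_px)

# e.g. a 128 KiB on-chip memory and 160-pixel-wide tiles of 8-bit pixels:
rows = max_tile_rows(128 * 1024, 160)
```

Choosing the grid so that each partial area fits entirely into the internal memory avoids repeated bus transfers per tile during the detection step.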
Furthermore, it is preferably provided that the characteristic pixels are calculated with a corner detection method, in particular with a FAST algorithm. Corner detection methods usually provide particularly prominent characteristic pixels. Moreover, the FAST algorithm (Features from Accelerated Segment Test) is known for the fact that the characteristic pixels can be determined particularly fast. This has the advantage that the characteristic pixels can be determined in real time in image sequences with a high frame rate. Furthermore, the FAST algorithm does not lead to any discontinuities at the borders of the partial area. However, the provided method is not restricted to corner detection methods or the FAST algorithm. An edge detection method, for example a Canny edge detector, can also be used. Furthermore, for example, a Harris detector or a Förstner detector can also be used to determine the characteristic pixels. Generally speaking, the present method can be performed with any detector for determining the characteristic pixels.
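A simplified, pure-Python sketch of the FAST-style segment test (here with an arc length of 9, i.e. a FAST-9 variant) illustrates how a threshold parameter controls the sensitivity; this is an illustrative reimplementation, not the patent's code:

```python
# Offsets of the 16-pixel Bresenham circle (radius 3) used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_corner(img, x, y, threshold, n=9):
    """Segment test: (x, y) is a corner if at least n contiguous circle
    pixels are all brighter than center + threshold or all darker than
    center - threshold. img is indexed as img[y][x]."""
    center = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    for passes in (lambda p: p > center + threshold,
                   lambda p: p < center - threshold):
        flags = [passes(p) for p in ring]
        flags = flags + flags          # duplicate to handle wrap-around arcs
        run = 0
        for f in flags:
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False

# A synthetic test image: bright square in the lower-right quadrant,
# so (10, 10) is a corner of the square.
img = [[255 if (x >= 10 and y >= 10) else 0 for x in range(20)] for y in range(20)]
```

Raising the threshold makes the test stricter and yields fewer characteristic pixels, which is the lever the adaptation loop described below acts on.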
A camera system according to the invention for a motor vehicle includes at least one camera for providing a sequence of images of an environmental region of the motor vehicle and an image processing device adapted to perform the method according to the invention.
A driver assistance system according to the invention includes a camera system according to the invention.
A motor vehicle according to the invention includes a driver assistance system according to the invention. The driver assistance system is an electronic auxiliary device for assisting a driver in certain driving situations. Furthermore, the driver assistance system warns the driver during or shortly before critical traffic situations by a suitable human-machine interface.
The preferred embodiments presented with respect to the method according to the invention and the advantages thereof correspondingly apply to the camera system according to the invention, to the driver assistance system according to the invention as well as to the motor vehicle according to the invention.
Further features of the invention are apparent from the claims, the figures and the description of figures. All of the features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations or else alone.
Now, the invention is explained in more detail based on a preferred embodiment as well as with reference to the attached drawings.
There show:

Fig. 1 in schematic plan view a motor vehicle with a camera system including a camera and an image processing device;

Fig. 2 a flow diagram of a method according to an embodiment of the invention;

Fig. 3 in schematic illustration an image of an environmental region of the motor vehicle provided by the camera, wherein partial areas are selected by means of a grid;

Fig. 4 in schematic illustration the motor vehicle in side view and a three-dimensional grid in a world coordinate system of the environmental region;

Fig. 5 in schematic illustration the image according to Fig. 3, wherein the respective partial areas have a different size;

Fig. 6 in schematic illustration the image according to Fig. 3, wherein the partial areas are selected depending on a field of view of the camera; and

Fig. 7 in schematic illustration a part of an image processing device, which includes a digital signal processing device with an internal memory.
In Fig. 1, a plan view of a motor vehicle 1 with a camera system 2 according to an embodiment of the invention is schematically illustrated. The camera system 2 includes a camera 3 and an image processing device 4, which can for example be integrated in the camera 3. However, this image processing device 4 can also be a component separate from the camera 3, which can be disposed in any position in the motor vehicle 1. In the embodiment, the camera 3 is disposed at the rear of the motor vehicle 1 and captures an environmental region 5 behind the motor vehicle 1. However, an application with a front camera or a lateral camera or a camera at any other location on the motor vehicle 1 is also possible.
The camera 3 has a horizontal capturing angle a, which can for example have a horizontal opening range between 120° and 200°, and a vertical capturing angle (not illustrated), which can for example extend from the surface of a road directly behind the motor vehicle 1 up to the horizon and beyond. These characteristics are for example enabled by a fish eye lens of the camera 3. The camera 3 can be a CMOS camera or else a CCD camera or any image capturing device by which characteristic pixels 18 in the environmental region 5 can be detected. The camera 3 is a video camera, which continuously captures an image sequence or a sequence of images 6. The image processing device 4 then processes the image sequence in real time and can determine the characteristic pixels 18 for each image 6 of the image sequence based on this image sequence.
The camera system 2 is for example a part of a driver assistance system or of an object recognition system, which monitors the environmental region 5 based on the detected characteristic pixels 18 and can warn a driver of the motor vehicle 1 of a collision by outputting a corresponding warning signal. However, the camera system 2 can also be a part of a system by which a posture or a position of the motor vehicle 1 can be determined. The position determination can also be effected based on the characteristic pixels 18 detected over the image sequence; the principle of odometry underlies this approach. Thus, for example, the position of the motor vehicle 1, which was originally provided by a global satellite navigation system, can be improved and/or more accurately determined with the aid of the camera system 2 if this global satellite navigation system is no longer available or only available in a limited manner. For example, this can be the case in areas which make reception of satellite signals impossible. A typical situation for this is passing through a tunnel.
Fig. 2 shows a flow diagram of the method according to the invention. In a first step S1, a predetermined number of partial areas 7 are selected from a first image 6. The first image 6 is a part of an image sequence with temporally consecutive images 6. An initialization is performed for each of the partial areas 7. The initialization includes determining a target value Ndes, which specifies a desired number of the characteristic pixels 18 for the respective partial area 7. In step S2, a loop for determining the characteristic pixels 18 is started for each partial area 7.
In a further step S3, the characteristic pixels 18 in the respective partial area 7 are determined by means of a detector. The detector is adapted to extract prominent pixels such as corners or edges in the partial area 7. The determination occurs depending on a parameter Thfil, which describes a sensitivity of the detector in calculating the characteristic pixels 18. Several parameters Thfil can also be adapted if the used detector controls the sensitivity via several parameters. In a further step S4, the characteristic pixels 18 acquired with the detector are output in the form of a list. The list can for example be stored in a memory of the image processing device 4. This list can be provided, for each partial area 7 of the first image 6, for further processing, in particular for object recognition or for odometry. Step S5 follows step S3; in it, a number Ndp of the characteristic pixels 18 is determined. Herein, the characteristic pixels 18 determined with the detector are counted. For this purpose, the list with the characteristic pixels can be evaluated by means of the image processing device 4.
Depending on the number Ndp of the characteristic pixels 18 and the target value Ndes, the parameter Thfil is adapted. To this end, first, a limit value factor Thfac is determined, based on which the parameter Thfil can be determined. The limit value factor Thfac is determined depending on the number Ndp of the detected characteristic pixels 18 and the target value Ndes. This can be mathematically described as follows:
Thfac = (Ndp - Ndes) / Ndp. (1)
In a step S6, the determined number Ndp of the characteristic pixels 18 is compared to the target value Ndes. The adaptation of the parameter Thfil is effected either in a step S7a or in a step S7b, according to whether the number Ndp is above or below the target value Ndes. Now, it is examined whether the number Ndp of the detected characteristic pixels 18 is less than or greater than the target value Ndes. The result of this comparison is used in calculating an intermediate value Thraw for the parameter Thfil. This can be determined according to the following formulas:
Thraw = Thini - Thini * Thfac, if Ndp > Ndes. (2)

In the other case, it applies:

Thraw = Thini + Thini * Thfac, if Ndp < Ndes. (3)
In this case, Thini is an initialization value for the parameter Thfil, which is first used as a parameter for determining the characteristic pixels 18. The initialization value Thini is in particular used in determining the characteristic pixels 18 of the first image 6 of the image sequence; that is, the initialization value Thini is in particular used in the first iteration of the algorithm. The initialization value Thini can result from a basic setting of the detector. In the present embodiment, according to the above-mentioned formula (2), the intermediate value Thraw is decremented in step S7b. Alternatively to this, according to the above-mentioned formula (3), the intermediate value Thraw is incremented in step S7a. Whether the intermediate value Thraw is incremented or decremented depends on whether the number Ndp is above the target value Ndes, so that the parameter is adapted such that presumably fewer characteristic pixels 18 are detected for the next image 6 of the image sequence, or whether the number Ndp is below the target value Ndes and the parameter is adapted such that presumably more characteristic pixels are detected for the next image 6 of the image sequence.
In a further step S8, the intermediate value Thraw is provided. Thraw is an intermediate value for the parameter Thfil of the detector to be applied to the next image 6 of the image sequence. Furthermore, in a step S9, the parameter Thfil for the next image 6 of the image sequence is calculated. In this case, the parameter Thfil is determined depending on Thini and Thraw. The calculation of Thfil can be represented as follows:
Thfil = (Wfil * Thini) + (1 - Wfil) * Thraw. (4a)

Equation (4a) can be used in particular for the first iteration step. For a further iteration step, the previous value of Thfil takes the place of Thini, and the calculation of Thfil can be represented as follows:

Thfil = (Wfil * Thfil) + (1 - Wfil) * Thraw. (4b)
Wfil corresponds to a weighting factor, which performs a weighting in the range from 0 to 1. This weighting factor Wfil is used to reduce oscillations around the target value Ndes. The method is not restricted to this described weighting factor Wfil; all other weighting factors which are identical in effect can also be applied.
In addition, the parameter Thfil is determined such that the sensitivity of the detector is greater than or equal to a predetermined minimum value Thmin. The minimum value Thmin is required since the detector otherwise returns characteristic pixels 18 which correspond to image noise or accidentally detected pixels. If the sensitivity of the detector is too low, pixels which actually do not have this property are also detected as characteristic pixels 18. Such a characteristic pixel 18 can then no longer be uniquely associated in a subsequent image 6 of the image sequence. This can be described with the following formula:

Thfil = Thmin, if Thfil < Thmin. (5)

For certain detectors, a maximum value Thmax can also be determined with the same intent. If one of the detectors outputs fewer characteristic pixels 18 upon increasing the parameter Thfil, and the sensitivity of the detector is thus increased, the maximum value Thmax can be used. The maximum value Thmax is then applied as follows:

Thfil = Thmax, if Thfil > Thmax. (6)
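Equations (1) to (6) can be transcribed directly into a small update routine; the function and variable names, as well as all numeric defaults, are illustrative and not taken from the patent:

```python
def adapt_threshold(n_dp, n_des, th_prev, th_ini, w_fil=0.5,
                    th_min=5.0, th_max=250.0, first_iteration=False):
    """Adapt the detector parameter Thfil for the next frame according to
    equations (1) to (6): n_dp is the detected number of characteristic
    pixels, n_des the target value Ndes, th_prev the previous Thfil."""
    th_fac = (n_dp - n_des) / n_dp                      # equation (1)
    if n_dp > n_des:
        th_raw = th_ini - th_ini * th_fac               # equation (2)
    else:
        th_raw = th_ini + th_ini * th_fac               # equation (3)
    # (4a) anchors on Thini in the first iteration, (4b) on the previous Thfil
    anchor = th_ini if first_iteration else th_prev
    th_fil = w_fil * anchor + (1 - w_fil) * th_raw
    # equations (5) and (6): clamp to [Thmin, Thmax]
    return min(max(th_fil, th_min), th_max)

# e.g. 150 pixels detected, 100 desired, initial threshold 20:
th_next = adapt_threshold(150, 100, th_prev=20.0, th_ini=20.0,
                          first_iteration=True)
```

The weighting factor w_fil damps the update, so the threshold, and with it the pixel count per partial area, converges toward the target instead of oscillating.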
In a further step S10, the previously determined parameter Thfil is now provided for the respective partial area 7, in order to use it for this partial area 7 in the next image 6 of the image sequence. Subsequently, the method is continued again at step S2. The subsequent detection of the characteristic pixels 18 in step S3 is performed for the respective partial area 7 of the next image 6, now based on the new parameter Thfil. These newly detected characteristic pixels 18 of the next image 6 are also, as already mentioned, provided for any further processing in step S4.
Fig. 3 shows an application of the camera system 2 according to the invention, in which the camera 3 is formed as a rear-view camera. Here, an image 6 of an image sequence, which is output by the rear-view camera, is illustrated. In the image 6, a region of interest 10 is selected. This region of interest 10 extends over an area of the image 6 which exclusively contains information about the environmental region 5 of the motor vehicle 1. As is apparent, pixels also occur in the image 6 which do not include information about the environmental region 5 of the motor vehicle 1. For example, filling pixels can be present in the image 6. The filling pixels can for example have been added by a transformation of the image 6. In addition, the image 6 shows areas of the motor vehicle 1, such as for example a number plate. These pixels are not taken into account in selecting the region of interest. In the region of interest 10, the partial areas 7 are determined or selected by means of a grid 11. In the present case according to Fig. 3, the partial areas 7 are of equal size, i.e. they cover an equally sized area in the image 6.
However, it can also be the case that the partial areas 7 do not cover the same area in the image 6. Thus, for example, Fig. 4 shows how a three-dimensional grid 12 in a world coordinate system of the environmental region 5 determines the selection of the partial areas. The three-dimensional grid 12 ensures a uniform coverage of the environmental region 5 by means of the partial areas 7. This is exemplified by viewing Fig. 4 and Fig. 5 together, wherein Fig. 5 shows the grid 11, which has arisen by a transformation of the grid 12 from the world coordinate system into the image coordinate system. It is clearly seen that in the two-dimensional representation of the environmental region 5, the corresponding imaging characteristic is also taken into account. Thus, the partial areas 7 which are close to the optical axis of the camera were barely changed by the transformation, but the partial areas 7 at the edge of the shown region of interest 10, thus farther away from the optical axis of the camera 3, are considerably deformed.
According to Fig. 6, it is provided that not all of the partial areas 7 within the region of interest 10 are used, and thus the characteristic pixels 18 are not determined in all of the partial areas 7 of the region of interest 10. The partial areas 7 which are not in a field of view of the camera 3 are not taken into account. This is done because the respective partial areas 7 can thereby cover exactly that area of the image 6 which exclusively contains information about the environmental region 5, and at the same time a shape of the partial areas 7 advantageous for the calculation of the characteristic pixels 18 can be selected. An advantageous shape for example exists if the partial areas 7 can be described by means of two rows and two columns of coordinates of the image 6. In Fig. 6, thus, deactivated partial areas 13 are shown, which differ from the partial areas 7 in that none of the characteristic pixels 18 are determined in the deactivated partial areas 13.
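Selecting only the partial areas that lie entirely inside the camera's field of view can be sketched as follows, here with a circular mask as a rough stand-in for a fish eye image circle; the names and numbers are illustrative, not from the patent:

```python
def active_tiles(tiles, center, radius):
    """Keep only the partial areas whose four corners all lie inside a
    circular field-of-view mask; the rest are treated as deactivated."""
    cx, cy = center
    def inside(x, y):
        return (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
    return [t for t in tiles
            if all(inside(x, y) for x in (t[0], t[2]) for y in (t[1], t[3]))]

# e.g. two tiles, only the second of which lies fully inside the mask:
tiles = [(0, 0, 10, 10), (45, 45, 55, 55)]
active = active_tiles(tiles, center=(50, 50), radius=20.0)
```

The excluded tiles correspond to the deactivated partial areas 13 of Fig. 6: they remain part of the grid but no characteristic pixels are determined in them.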
Fig. 7 shows the image processing device 4 including a digital signal processing device 14 with an internal memory 15 and an external memory 16. The external memory 16 is connected to the digital signal processing device 14 by a bus 17, and thus data can be transmitted from the external memory 16 to the internal memory 15 and vice versa. Now, it is provided that the respective images 6 of the image sequence are stored in the external memory 16 after capture by the camera 3, and each respective partial area 7 is completely transmitted to the internal memory 15 via the bus 17. The digital signal processing device 14 can now perform the determination of the characteristic pixels 18 based on the partial area 7 stored in the internal memory 15. Subsequently, the transmission of the information about the characteristic pixels 18 back into the external memory 16 is effected. Now, the internal memory 15 is available for the next partial area 7.

Claims

1. Method for determining characteristic pixels (18) for a camera system (2) of a motor vehicle (1), including the steps of
- capturing an image sequence of an environmental region (5) of the motor vehicle (1), which includes a temporal sequence of images (6), by means of a camera (3) of the camera system (2),
- selecting a predetermined number of partial areas (7) of one of the images (6) of the image sequence by means of an image processing device (4) of the camera system (2) (S1),
characterized in that
the following steps are performed for at least one of the partial areas (7):
- presetting a target value (Ndes) for a number (Ndp) of the characteristic pixels (18),
- determining the characteristic pixels (18) of a first image of the image sequence by means of a detector of the camera system (2) depending on a parameter (Thfil) of the detector, which describes a sensitivity of the detector in calculating the characteristic pixels (18) (S3),
- determining the number (Ndp) of the characteristic pixels (18) (S5),
- comparing the number (Ndp) of the characteristic pixels (18) to the target value (Ndes) (S6),
- adapting the parameter (Thfil) of the detector depending on the comparison (S7a, S7b, S8, S9), and
- determining the characteristic pixels (18) of a second image of the image sequence depending on the respective adapted parameter (Thfil).
2. Method according to claim 1,
characterized in that
for each of the partial areas (7), the same target value (Ndes) is preset for the number (Ndp) of the characteristic pixels (18).
3. Method according to claim 1 or 2,
characterized in that
the parameter (Thfil) of the detector is adapted such that the sensitivity of the detector is greater than a predetermined minimum value (Thmin).
4. Method according to any one of the preceding claims,
characterized in that
a region of interest (10) is preset in the respective image (6) and the partial areas (7) are selected in the region of interest (10) of the respective image (6).
5. Method according to any one of the preceding claims,
characterized in that
the partial areas (7) are selected by dividing the respective image (6) by means of a grid (11).
6. Method according to claim 5,
characterized in that
the respective region of interest (10) is divided into identically sized partial areas (7) by the grid (11).
7. Method according to any one of the preceding claims,
characterized in that
the partial areas (7) are selected depending on a field of view of the camera (3), which exclusively contains information about the environmental region (5).
8. Method according to any one of the preceding claims,
characterized in that
in selecting the partial areas (7), an image size of each of the partial areas (7) is set depending on an angle of the respective partial area (7) to an optical axis of the camera (3) by means of the image processing device (4).
9. Method according to claim 8,
characterized in that
for setting the image size of each of the partial areas (7), a three-dimensional grid is transformed from a world coordinate system into a two-dimensional image coordinate system of the image (6).
10. Method according to any one of the preceding claims,
characterized in that
the determination of the characteristic pixels (18) of the partial area (7) by the image processing device (4) is performed in an internal memory of a digital signal processing device (14) of the image processing device (4).
11. Method according to any one of the preceding claims,
characterized in that
the characteristic pixels (18) are calculated with a corner detection method, in particular with a FAST algorithm.
12. Camera system (2) for a motor vehicle (1), including a camera (3) for capturing an image sequence of an environmental region (5) of the motor vehicle (1), and including an image processing device (4), which is adapted to calculate characteristic pixels (18) in at least one image (6) of the image sequence,
characterized in that
the camera system (2) is adapted to perform a method according to any one of the preceding claims.
13. Driver assistance system with a camera system (2) according to claim 12.
14. Motor vehicle (1) with a driver assistance system according to claim 13.
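The feedback loop of claim 1 can be sketched in a few lines of code. The step size, the bounds and the toy detector below are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of the claimed threshold feedback: per partial area, detect,
# count, compare the count to the target value, then raise or lower the
# detector sensitivity parameter for the next image. The step size, bounds,
# and toy detector are illustrative assumptions.

def toy_detector(tile, threshold):
    """Stand-in corner detector: a pixel counts as characteristic above threshold."""
    return [(x, y) for y, row in enumerate(tile)
            for x, v in enumerate(row) if v > threshold]

def adapt_threshold(threshold, n_detected, n_target, step=1, th_min=1, th_max=50):
    """One feedback step: too many points -> less sensitive, too few -> more sensitive."""
    if n_detected > n_target:
        threshold += step          # raise threshold, fewer detections next frame
    elif n_detected < n_target:
        threshold -= step          # lower threshold, more detections next frame
    return max(th_min, min(th_max, threshold))  # keep parameter within bounds

# First image: the count exceeds the target value, so the threshold is raised
# before the second image of the sequence is processed.
tile = [[10, 2, 10], [2, 10, 2], [10, 2, 10]]
th = 5
n = len(toy_detector(tile, th))        # 5 pixels exceed the threshold of 5
th = adapt_threshold(th, n, n_target=3)
```

Because each partial area keeps its own threshold, a uniform spatial distribution of characteristic pixels over the whole image can be maintained even when contrast varies strongly between image regions.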
PCT/EP2015/065928 (WO2016012289A1), priority date 2014-07-25, filing date 2015-07-13, status Ceased: Method for determining characteristic pixels for a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102014110527.3 2014-07-25
DE102014110527.3A DE102014110527A1 (en) 2014-07-25 2014-07-25 Method for determining characteristic pixels for a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle

Publications (1)

Publication Number Publication Date
WO2016012289A1 true WO2016012289A1 (en) 2016-01-28

Family

ID=53761330

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/065928 Ceased WO2016012289A1 (en) 2014-07-25 2015-07-13 Method for determining characteristic pixels for a camera system of a motor vehicle, camera system, driver assistance system and motor vehicle

Country Status (2)

Country Link
DE (1) DE102014110527A1 (en)
WO (1) WO2016012289A1 (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4233723B2 (en) * 2000-02-28 2009-03-04 本田技研工業株式会社 Obstacle detection device, obstacle detection method, and recording medium recording an obstacle detection program
DE10066189B4 (en) * 2000-05-18 2006-09-07 Optigraf Ag Vaduz Detecting stationary or moving objects such as images or texts by selecting search section within search window
US7231288B2 (en) * 2005-03-15 2007-06-12 Visteon Global Technologies, Inc. System to determine distance to a lead vehicle
JP4988408B2 (en) * 2007-04-09 2012-08-01 株式会社デンソー Image recognition device
DE102012002321B4 (en) * 2012-02-06 2022-04-28 Airbus Defence and Space GmbH Method for recognizing a given pattern in an image data set

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
US8553081B2 (en) * 2006-08-31 2013-10-08 Alpine Electronics, Inc. Apparatus and method for displaying an image of vehicle surroundings

Non-Patent Citations (7)

Title
ALBERT S HUANG ET AL: "Visual Odometry and Mapping for Autonomous Flight Using an RGB-D Camera", INT. SYMPOSIUM ON ROBOTICS RESEARCH (ISRR), 28 August 2011 (2011-08-28), XP055133937, Retrieved from the Internet <URL:http://www.cs.washington.edu/robotics/projects/postscripts/Huang-ISRR-2011.pdf> [retrieved on 20140808] *
FLORE FAILLE: "Adapting Interest Point Detection to Illumination Conditions", 10 December 2003 (2003-12-10), XP055221996, Retrieved from the Internet <URL:http://www-prima.inrialpes.fr/perso/Tran/Draft/InterestPoint/Adapting-Corner-Detector-illumination.pdf> [retrieved on 20151019] *
FLORENTZ GASPARD ET AL: "SuperFAST: Model-based adaptive corner detection for scalable robotic vision", 2014 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, IEEE, 3 July 2014 (2014-07-03), pages 1003 - 1010, XP032676732, DOI: 10.1109/IROS.2014.6942681 *
GASPARD FLORENTZ: "SuperFAST: Model-Based Adaptive Corner Detection for Scalable Robotic Vision", 3 July 2014 (2014-07-03), XP055220092, Retrieved from the Internet <URL:http://u2is.ensta-paristech.fr/seminaire.php?lang=en> [retrieved on 20151012] *
RAINER VOIGT ET AL: "Robust embedded egomotion estimation", INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2011 IEEE/RSJ INTERNATIONAL CONFERENCE ON, IEEE, 25 September 2011 (2011-09-25), pages 2694 - 2699, XP032201319, ISBN: 978-1-61284-454-1, DOI: 10.1109/IROS.2011.6095122 *
RUSSELL D ET AL: "A highly efficient block-based dynamic background model", PROCEEDINGS. IEEE CONFERENCE ON ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE, 2005. COMO, ITALY SEPT. 15-16, 2005, PISCATAWAY, NJ, USA,IEEE, PISCATAWAY, NJ, USA, 15 September 2005 (2005-09-15), pages 417 - 422, XP010881212, ISBN: 978-0-7803-9385-1, DOI: 10.1109/AVSS.2005.1577305 *
V. NANNEN; G. OLIVER: "Grid-Based Spatial Keypoint Selection for Real Time Visual Odometry", 2ND INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION APPLICATIONS AND METHODS, BARCELONA, 2013

Also Published As

Publication number Publication date
DE102014110527A1 (en) 2016-01-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15744137; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15744137; Country of ref document: EP; Kind code of ref document: A1)