JP2009265891A - Obstacle recognition device - Google Patents
Obstacle recognition device
- Publication number
- JP2009265891A (application number JP2008113922A)
- Authority
- JP
- Japan
- Prior art keywords
- image
- obstacle
- vehicle
- road surface
- expansion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Description
The present invention relates to an obstacle recognition device that processes captured images to recognize obstacles on the traveling path of the host vehicle.
Conventionally, in realizing vehicle pre-crash safety (damage mitigation, collision avoidance, and so on), how to recognize obstacles ahead of and behind the moving host vehicle is an important question.
As a device for recognizing such obstacles, an apparatus has been proposed that photographs one direction (for example, ahead of the host vehicle) with an on-board camera at fixed intervals, applies a projective transformation to the images captured at two successive times, takes their difference, detects feature points exhibiting a temporal shift, and from these detects the optical flow of motion vectors to recognize obstacles such as vehicles ahead of the host vehicle (objects standing vertically on the road surface) (see, for example, Patent Document 1).
The conventional device of Patent Document 1 detects obstacles on the road without misrecognizing road markings and the like by detecting the optical flow of motion vectors from the difference image obtained after projective transformation of the images captured at the two successive times. However, because it recognizes an obstacle from the slight, time-varying differences in distance and shape between the host vehicle and an obstacle such as a preceding vehicle, seen in images taken from a single direction, it not only requires the complex image processing needed to detect optical flow but also cannot recognize obstacles easily; depending on the situation it may fail to recognize them at all, so it cannot recognize obstacles stably and reliably.
The applicant of the present application has therefore already filed an obstacle recognition device (Japanese Patent Application No. 2007-186549) comprising: photographing means capable of photographing the road surface from a plurality of directions as seen from the traveling host vehicle; image acquisition means for acquiring images of substantially the same road surface area captured by the photographing means from a plurality of different directions; identification means for extracting and identifying a predetermined monitoring range contained in the images acquired from the plurality of directions; and recognition means for recognizing, based on the identification result, image portions that differ between the captured images of the monitoring range as obstacles standing vertically on the road surface.
In that earlier device, for example, an image of a given road surface area ahead of the host vehicle captured by the photographing means at time tm (the front image) and an image of substantially the same road surface area behind the host vehicle captured at time tn, after the vehicle has passed over that area (the rear image), are processed by the identification means, which extracts substantially the same monitoring range from both and identifies obstacles on the road from differences in shading between the two images.
That is, if an obstacle is ahead of the host vehicle, it appears in the monitoring range of the front image but not in that of the rear image; conversely, if an obstacle is behind the host vehicle, it appears in the monitoring range of the rear image but not in that of the front image. Since an obstacle ahead of or behind the vehicle is therefore contained in only one of the two monitoring ranges, making their contents differ greatly, the earlier device can recognize obstacles ahead of or behind the vehicle easily, stably, and reliably from a simple difference in image shading (light and dark), without complex image processing such as optical-flow detection.
However, in that earlier device, the front and rear images differ in shading even for the same road marking on the road surface, owing, for example, to changes in how light strikes the scene at the time of capture. In addition, even a slight positional misalignment between the images arises when the shading difference is computed. Portions such as road markings may therefore be misrecognized as obstacles.
The object of the present invention is to recognize obstacles accurately and reliably by simple processing of captured images, without misrecognition of portions such as road markings.
To achieve this object, the obstacle recognition device of the present application comprises: photographing means capable of photographing the road surface in a plurality of directions as seen from the traveling host vehicle; image acquisition means for acquiring images of substantially the same road surface area captured by the photographing means from a plurality of different directions; differential binarization means for differentiating and binarizing each captured image acquired by the image acquisition means; dilation means for applying dilation to each captured image after differential binarization; computation means for outputting an image of the portions that differ between the dilated captured images; erosion means for applying erosion to the image output by the computation means; and recognition means for recognizing obstacles standing vertically on the road surface from the eroded image (claim 1).
The obstacle recognition device of the present application may further comprise projective transformation means for applying a projective transformation to each captured image acquired by the image acquisition means, the computation means outputting the image of the differing portions on the basis of the projectively transformed images (claim 2).
According to the invention of claim 1, each captured image acquired by the image acquisition means is differentiated and binarized by the differential binarization means and converted into a differential binarized image.
The dilation applied by the dilation means then widens the lines of the differential binarized image. Consequently, even if the same road marking appears in the captured front and rear images with some positional misalignment and difference in shading, differential binarization and dilation absorb these errors so that the two renderings of the marking substantially coincide, and the image passed from the computation means to the erosion means (for example, the image of portions differing between the front and rear images) is one in which the road marking has almost entirely disappeared.
Further, since erosion by the erosion means restores the original line width, the thin residue of the road marking also vanishes from the eroded image, leaving only the image portions of obstacles standing vertically on the road surface, such as a vehicle present in only one of the captured front and rear images.
The recognition means can therefore reliably recognize obstacles from the eroded image, and obstacles standing vertically on the road surface, such as vehicles, can be recognized accurately and reliably by simple processing of the captured images, without misrecognizing road markings and the like.
According to the invention of claim 2, the projective transformation by the projective transformation means yields, for example for the front and rear images, bird's-eye views of their respective captured areas. This has the advantage that superimposing the front and rear images, applying dilation, and so on become still easier.
Next, to describe the present invention in more detail, one embodiment will be described with reference to FIGS. 1 to 10.
FIG. 1 shows the block configuration of an obstacle recognition device 2 mounted on the host vehicle 1, and FIG. 2 shows a configuration example of its imaging means. FIG. 3 is a flowchart explaining the operation of the obstacle recognition device 2. FIGS. 4, 5, and 7 to 10 show example images illustrating the processing of the obstacle recognition device 2, and FIG. 6 is an explanatory diagram of the projective transformation.
As shown in FIG. 1, the obstacle recognition device 2 mounted on the vehicle 1 comprises photographing means 3, image processing means 4 of microcomputer configuration that processes and identifies the captured images, and data storage means 5 that rewritably stores the captured images and other data.
The photographing means 3 consists of monochrome or color monocular cameras capable of photographing the road surface from a plurality of directions as seen from the traveling host vehicle 1, and outputs the frame images (or field images) captured moment by moment.
Here, "a plurality of directions" means all or some (two or more) of the directions ahead of, behind, to the left of, and to the right of the host vehicle 1; in the present embodiment, to simplify the description, the photographing directions are the two most practical ones, ahead of and behind the host vehicle 1.
To photograph the road surface ahead of and behind the host vehicle 1, the photographing means 3 consists, as shown in FIG. 2, of a monocular camera 3a in front of the inner mirror for monitoring ahead of the host vehicle 1 and a monocular camera 3b above the back door for monitoring behind it.
The two monocular cameras 3a and 3b are identical; for example, each slices its moving image into still images at 1/30-second intervals and outputs them. The installation angle of each camera is set so that the point at infinity falls in the upper part of the image, so that as much as possible of the white lines beside the vehicle can be used when the projective transformation is applied.
Next, the image processing means 4 implements the image acquisition means, projective transformation means, differential binarization means, dilation means, computation means, erosion means, and recognition means of the present invention, and repeatedly executes the recognition processing program of steps S1 to S9 of FIG. 3 while the host vehicle 1 is traveling.
The image acquisition means acquires images of substantially the same road surface area captured by the photographing means 3 from a plurality of different directions. In steps S1 and S2 of FIG. 3, it takes in the image F1 (FIG. 4) of the road surface area ahead of the host vehicle 1 captured by the monocular camera 3a at time ta (hereinafter the front image) and holds it in the data storage means 5; then, at a time tb delayed by a short interval Δt determined from the vehicle speed given by a vehicle speed sensor (not shown) and other factors, it takes in the image B1 (FIG. 4) of substantially the same road surface area behind the host vehicle 1 captured by the monocular camera 3b (hereinafter the rear image) and holds it in the data storage means 5. In FIG. 4, the front image F1 contains only a road marking a, while the rear image B1 contains both the road marking a and a following vehicle b, an obstacle standing vertically on the road surface.
In the present embodiment, the projective transformation means applies a projective transformation to the front image F1 and the rear image B1 in step S3 of FIG. 3.
As shown in FIG. 6, this projective transformation is the well-known coordinate transformation that projects a point P(x, y) on a plane L to a point P0(u, v) on another plane L0 with respect to a projection center O; in a road environment where the surface photographed by the monocular cameras 3a and 3b can be assumed planar, an image can thus be projectively transformed into the view from an arbitrary viewpoint.
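The point mapping P(x, y) → P0(u, v) described above can be sketched in a few lines. This is only an illustrative implementation, not the patent's own: the 3×3 homography matrix H and its entries below are assumptions, and a real bird's-eye transform would derive H from each camera's installation angle and height.

```python
def apply_homography(H, x, y):
    """Map a plane point P(x, y) to P0(u, v) through a 3x3 homography H,
    using homogeneous coordinates: (u', v', w) = H * (x, y, 1), then divide by w."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

# Toy example: a homography whose bottom row is not (0, 0, 1) produces the
# perspective foreshortening that a bird's-eye transform has to undo.
H = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.5, 1.0]]
print(apply_homography(H, 2.0, 2.0))  # (1.0, 1.0): farther points shrink
```

The identity matrix leaves points unchanged, which is a convenient sanity check when fitting such a matrix to real camera geometry.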
Assuming the road surface to be a plane, projectively transforming the front image F1, captured by the monocular camera 3a at time ta, and the rear image B1, captured by the monocular camera 3b at the time tb delayed by Δt, yields the transformed front image F2 and rear image B2 of FIG. 4.
Since obstacles such as the vehicle b are recognized by superimposing the front image F2 and the rear image B2, in practice it is desirable to normalize the two images so that they can be superimposed, as follows. (i) Photograph the same area with the monocular cameras 3a and 3b. (ii) Using the cameras' angles and heights, and assuming the road surface to be a perfect plane, generate images viewed from directly above by projective transformation; at this stage the white lines on both sides of the lane are continuously extracted by the Hough transform and the angle is adjusted so that they become parallel. (iii) Match the lane width and direction of the front image F2 and the rear image B2 using the straight-line components of the white lines obtained by the Hough transform. (iv) Since the distance traveled by the host vehicle 1 can be obtained from the vehicle speed and the like, align the two images precisely, by template matching or the like, over a region taken slightly larger than the traveling lane.
If the front image F2 and the rear image B2 obtained in this way are superimposed and the shading difference is taken, the difference image FB2 of FIG. 7 is obtained. In FB2, however, areas close to the monocular cameras 3a and 3b are dark, and the shading also varies greatly with the orientation of the cameras relative to illumination such as sunlight, as well as with the material of the road surface and other factors. Obstacles such as the vehicle b, present in only one of the front image F2 and the rear image B2, may therefore not be recognizable from the difference image FB2. The same applies when obstacles are sought in the shading difference between the untransformed front image F1 and rear image B1.
The differential binarization means differentiates and binarizes the images from the plurality of directions acquired by the image acquisition means, or their projectively transformed versions. In the present embodiment, which includes the projective transformation means, step S4 of FIG. 3 differentiates the front image F2 and the rear image B2 to emphasize their shading and then binarizes them with a predetermined threshold, forming the differential binarized front image F3 and rear image B3 of FIG. 4. In the differentiation, it is preferable to smooth the images F2 and B2 before taking their differentials, so that unwanted differential values due to fine unevenness of the road surface do not arise.
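The differentiate-then-threshold step can be sketched on a small grayscale grid. This is a minimal illustration under assumptions: the patent does not fix a particular differential operator, so a forward-difference gradient and an arbitrary threshold of 100 stand in here.

```python
def differentiate_and_binarize(img, threshold):
    """Approximate the gradient magnitude with forward differences,
    then threshold it to a 0/1 edge image."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]   # horizontal shading change
            gy = img[y + 1][x] - img[y][x]   # vertical shading change
            if abs(gx) + abs(gy) >= threshold:
                out[y][x] = 1
    return out

# A bright stripe (e.g. a road marking) on a dark road yields '1' pixels
# only at its two borders, where the shading changes sharply.
road = [[10, 10, 200, 200, 10, 10]] * 3
edges = differentiate_and_binarize(road, 100)
print(edges[0])  # [0, 1, 0, 1, 0, 0]
```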
The dilation means applies dilation to the front image F3 and the rear image B3 in step S5 of FIG. 3, thickening the lines formed by their logical "1" pixels to form the dilated front image F4 and rear image B4 of FIG. 4. Dilation can be realized simply by dividing each of F3 and B3 into suitable small blocks of, for example, 4 or 8 pixels and, whenever even one pixel of a block is "1", setting all pixels of that block to "1".
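The block-based dilation just described can be sketched as follows. The 2×2 block size corresponds to the 4-pixel blocks the text mentions; this coarse scheme is a stand-in for true morphological dilation and is illustrative only.

```python
def dilate_blocks(img, bs=2):
    """Block dilation: split the binary image into bs x bs blocks; if any
    pixel in a block is 1, set the whole block to 1."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, bs):
        for bx in range(0, w, bs):
            block_has_one = any(img[y][x]
                                for y in range(by, min(by + bs, h))
                                for x in range(bx, min(bx + bs, w)))
            if block_has_one:
                for y in range(by, min(by + bs, h)):
                    for x in range(bx, min(bx + bs, w)):
                        out[y][x] = 1
    return out

# A single edge pixel grows to fill its 2x2 block, widening thin lines
# so that slightly misaligned copies of the same marking will overlap.
thin = [[0, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
fat = dilate_blocks(thin)
```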
In step S6 of FIG. 3, the computation means computes the exclusive OR of the front image F4 and the rear image B4 and outputs the composite (exclusive-OR) image FB4 of FIG. 5 as the image of the portions where the shading of F4 and B4 differs.
In the front images F2, F3, and F4 the host vehicle 1 is at the bottom; in the rear images B2, B3, and B4 it is at the top.
In the road-marking portion a of the composite image FB4, the front image F4 and the rear image B4 overlap, so most of the marking cancels out and only thin, non-overlapping edges remain. The portion differing between the front image F1 and the rear image B1, namely the vehicle b, remains as a thick, dilated outline. Instead of the exclusive OR, a computation taking the difference of the two images F3 and B3 may be used; in that case, setting pixels whose computed value is "−1" to "0", for example, yields the same image as the composite image FB4.
In step S7 of FIG. 3, the erosion means applies to the composite image FB4 the erosion that is the inverse of the dilation, thinning its lines back to their original width and forming the composite (eroded) image FB5 of FIG. 5, from which the thin residue of the road marking a has also disappeared.
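The inverse erosion step can be sketched as follows; the 3×3 structuring element is an assumption (the patent only requires that erosion undo the earlier dilation). A pixel survives only if its whole neighbourhood is "1", so a solid band shrinks back toward its original width while isolated thin remnants vanish entirely.

```python
def erode(img):
    """3x3 erosion: keep a pixel at 1 only if it and all 8 neighbours are 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(img[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

# A one-pixel-thin remnant line (like the leftover edges of a road marking)
# is erased completely...
thin = [[0, 0, 0, 0, 0], [1, 1, 1, 1, 1], [0, 0, 0, 0, 0]]
# ...while a three-pixel-thick band (like a dilated obstacle edge)
# keeps its centre row.
thick = [[1, 1, 1, 1, 1]] * 3
```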
In step S8 of FIG. 3, the recognition means recognizes the vehicle-b portion of the composite image FB5 as an obstacle standing vertically on the road surface and sends the recognition result to collision prediction processing means (not shown) for damage mitigation and collision avoidance of the host vehicle 1.
That is, in general, when dilation and erosion are applied to a differential binarized image, its points and lines dilate and then erode back to their original state. If straight lines in two differential binarized images whose coordinates are a distance d apart are each dilated by d + α and the images combined, the two differential-value regions overlap with a width of 2α. If the exclusive OR of the two images dilated d + α times is taken, the edges of an object to be recognized that exist in only one of the two images (the differing image portions) remain without overlapping even after dilation, while image portions common to both images, such as road markings, overlap and thin each other out. Eroding the resulting image of the differing portions d + α times then erases the thinned-out portions, while the edges of the object to be recognized return to their original width and remain.
The obstacle recognition device 2 of the present embodiment exploits these dilation and erosion properties of differential images. It dilates the front image F3 and the rear image B3, differential binarized images of the same road surface, and obtains the composite image FB4 as the exclusive OR of the dilated front image F4 and rear image B4. In FB4, the edges of the vehicle b to be recognized, present only in the rear image B3, remain without overlapping even after dilation, while the road marking a common to both F3 and B3 overlaps and thins itself out. FB4 is then eroded the same number of times as it was dilated, giving the composite image FB5 in which the road marking a has disappeared and the vehicle b has returned to its original width; the vehicle b is recognized as an obstacle from FB5.
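The dilate → XOR → erode principle above can be demonstrated end to end on toy binary edge images. This is a sketch under stated assumptions: a 3×3 dilation and erosion applied once each stand in for the d + α repetitions; the front image F holds a road-marking edge at row 2; and the rear image B holds the same marking misaligned by one row plus an obstacle edge present only in B.

```python
def dilate(img):
    """3x3 dilation: a pixel becomes 1 if any in-bounds neighbour is 1."""
    h, w = len(img), len(img[0])
    return [[1 if any(img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if 0 <= y + dy < h and 0 <= x + dx < w) else 0
             for x in range(w)] for y in range(h)]

def erode(img):
    """3x3 erosion: a pixel stays 1 only if its whole neighbourhood is 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(img[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

def xor(a, b):
    return [[p ^ q for p, q in zip(ra, rb)] for ra, rb in zip(a, b)]

H, W = 12, 9
F = [[0] * W for _ in range(H)]   # front image: road marking only
B = [[0] * W for _ in range(H)]   # rear image: marking (shifted) + obstacle
for x in range(1, 8):
    F[2][x] = 1                    # marking edge seen from the front
    B[3][x] = 1                    # same marking, misaligned by one row
    B[8][x] = 1                    # obstacle edge, present only in B
result = erode(xor(dilate(F), dilate(B)))
# The misaligned road marking cancels out entirely; only the obstacle edge
# survives, restored to its original one-pixel width at row 8.
```

Running this, `result` contains "1" pixels only along the obstacle edge: the dilated copies of the marking overlap and cancel in the XOR, and erosion removes the thin leftover fringes, exactly the behaviour the embodiment relies on.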
In the eroded composite image FB5, the thin residue of the road marking a has disappeared, and only the vehicle b, the obstacle present in just one of the front image F1 and rear image B1, remains. The recognition means can therefore recognize the vehicle b as an obstacle accurately and reliably, and obstacles standing vertically on the road surface can be recognized accurately and reliably by simple processing of the captured images, without misrecognizing the road marking a and the like.
Moreover, because the projective transformation means is provided, the dilation, exclusive-OR, and other processing can easily be performed on the bird's-eye front image F2 and rear image B2 of the areas captured in F1 and B1.
If the exclusive OR of the differential binarized front image F3 and rear image B3 is taken directly, the composite image FB3 of FIG. 8 is obtained. As is clear from comparing FB3 with FB5, merely taking the exclusive OR of F3 and B3 leaves the road marking a behind, causing misrecognition of obstacles.
This is because the road surface is not a perfect plane: the further it departs from an ideal plane, the more the projectively transformed image is distorted, so the white lines extracted by the Hough transform are distorted and it becomes difficult to superimpose the front image F3 and the rear image B3 accurately.
Next, an experiment was conducted with no vehicle present ahead of or behind the host vehicle 1, with the results shown in FIGS. 9 and 10. There, F11, F21, and F41 are front images corresponding to F1, F2, and F4; B11, B21, and B41 are rear images corresponding to B1, B2, and B4; and FB41 and FB51 are composite images corresponding to FB4 and FB5.
It was confirmed that, when no obstacle such as a vehicle b is present, even though the front image F41 and the rear image B41 contain the dilated, differential binarized road marking a, the dilation, exclusive-OR, and erosion processing ultimately yields a composite image FB51 from which the image content has almost completely disappeared.
The present invention is not limited to the embodiment described above, and various modifications other than those described may be made without departing from its spirit.
For example, in the above embodiment the projective transformation is performed before the differential binarization, but it may be performed at any stage, such as after the differential binarization.
Further, in the above embodiment the invention is applied to the case where the road surface is photographed from the front and the rear of the host vehicle 1, but the photographing directions are not limited to the front and the rear; for example, they may be two directions consisting of the front or the rear and either the left or the right side, or any three or more of the front, rear, left, and right directions.
The present invention can be applied to obstacle recognition for various types of vehicles.
DESCRIPTION OF SYMBOLS
1 Host vehicle
2 Obstacle recognition device
3 Imaging means
4 Image processing means
b Vehicle as an obstacle
Claims (2)

1. An obstacle recognition device comprising:
photographing means capable of photographing a road surface in a plurality of directions as viewed from a traveling vehicle;
image acquisition means for acquiring images photographed by the photographing means from a plurality of different directions for substantially the same road surface area;
differential binarization means for differentiating and binarizing each photographed image acquired by the image acquisition means;
expansion means for performing an expansion process on each photographed image after the differential binarization by the differential binarization means;
calculation means for outputting an image of the portions that differ between the photographed images after the expansion process by the expansion means;
contraction means for performing a contraction process on the image output from the calculation means; and
recognition means for recognizing an obstacle perpendicular to the road surface from the image after the contraction process by the contraction means.

2. The obstacle recognition device according to claim 1, further comprising projective transformation means for performing a projective transformation on each photographed image acquired by the image acquisition means, wherein the calculation means outputs the image of the differing portions based on each photographed image after the projective transformation by the projective transformation means.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2008113922A JP2009265891A (en) | 2008-04-24 | 2008-04-24 | Obstacle recognition device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| JP2009265891A true JP2009265891A (en) | 2009-11-12 |
Family
ID=41391681
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| JP2008113922A Pending JP2009265891A (en) | 2008-04-24 | 2008-04-24 | Obstacle recognition device |
Country Status (1)
| Country | Link |
|---|---|
| JP (1) | JP2009265891A (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH03282707A (en) * | 1990-03-30 | 1991-12-12 | Mazda Motor Corp | Environment recognition device for mobile vehicle |
| JP2002133419A (en) * | 2000-10-19 | 2002-05-10 | Mitsubishi Heavy Ind Ltd | Method and device for extracting object from image |
| JP2007172501A (en) * | 2005-12-26 | 2007-07-05 | Alpine Electronics Inc | Vehicle driving support apparatus |
| JP2008034981A (en) * | 2006-07-26 | 2008-02-14 | Fujitsu Ten Ltd | Image recognition device and method, pedestrian recognition device and vehicle controller |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104567872A (en) * | 2014-12-08 | 2015-04-29 | 中国农业大学 | Extraction method and system of agricultural implements leading line |
| CN104567872B (en) * | 2014-12-08 | 2018-09-18 | 中国农业大学 | A kind of extracting method and system of agricultural machinery and implement leading line |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11170466B2 (en) | Dense structure from motion | |
| KR102384175B1 (en) | Camera device for vehicle | |
| US7684590B2 (en) | Method of recognizing and/or tracking objects | |
| US11482015B2 (en) | Method for recognizing parking space for vehicle and parking assistance system using the method | |
| JP5108605B2 (en) | Driving support system and vehicle | |
| JP4973736B2 (en) | Road marking recognition device, road marking recognition method, and road marking recognition program | |
| KR101243108B1 (en) | Apparatus and method for displaying rear image of vehicle | |
| US20150042799A1 (en) | Object highlighting and sensing in vehicle image display systems | |
| US10108866B2 (en) | Method and system for robust curb and bump detection from front or rear monocular cameras | |
| JP6569280B2 (en) | Road marking detection device and road marking detection method | |
| KR20200000953A (en) | Around view monitoring system and calibration method for around view cameras | |
| US11176397B2 (en) | Object recognition device | |
| JP4344860B2 (en) | Road plan area and obstacle detection method using stereo image | |
| JP5521217B2 (en) | Obstacle detection device and obstacle detection method | |
| Raguraman et al. | Intelligent drivable area detection system using camera and lidar sensor for autonomous vehicle | |
| JP5091897B2 (en) | Stop line detector | |
| WO2011016257A1 (en) | Distance calculation device for vehicle | |
| CN105291982B (en) | Stopping thread detector rapidly and reliably | |
| WO2018146997A1 (en) | Three-dimensional object detection device | |
| Hwang et al. | Vision-based vehicle detection and tracking algorithm design | |
| CN113632450B (en) | Imaging system and image processing apparatus | |
| EP3168779A1 (en) | Method for identifying an incoming vehicle and corresponding system | |
| WO2022009537A1 (en) | Image processing device | |
| JP2009265891A (en) | Obstacle recognition device | |
| KR20180069282A (en) | Method of detecting traffic lane for automated driving |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| 2010-12-13 | A621 | Written request for application examination | JAPANESE INTERMEDIATE CODE: A621 |
| 2012-02-02 | A977 | Report on retrieval | JAPANESE INTERMEDIATE CODE: A971007 |
| 2012-02-07 | A131 | Notification of reasons for refusal | JAPANESE INTERMEDIATE CODE: A131 |
| 2012-03-16 | A521 | Written amendment | JAPANESE INTERMEDIATE CODE: A523 |
| 2012-07-03 | A02 | Decision of refusal | JAPANESE INTERMEDIATE CODE: A02 |