WO2019156072A1 - Posture estimation device - Google Patents
Posture estimation device
- Publication number
- WO2019156072A1 (PCT/JP2019/004065)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- road surface
- unit
- points
- posture
- captured image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/22—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
- B60R1/23—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
- B60R1/27—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/02—Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/26—Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Definitions
- This disclosure relates to a posture estimation device that estimates the posture of an imaging unit.
- Patent Document 1 discloses a technique for estimating the posture of an imaging unit mounted on a vehicle while the vehicle is running: an optical flow of feature points on the road surface is obtained, a homography matrix is estimated from the optical flow, and the posture of the imaging unit is estimated from the homography matrix.
- One aspect of the present disclosure is to provide a technique for improving accuracy in a posture estimation device that estimates the posture of an imaging unit.
- the posture estimation device includes an image acquisition unit, a feature point extraction unit, a basic estimation unit, and a posture estimation unit.
- the image acquisition unit is configured to repeatedly acquire each captured image obtained by one or more imaging units mounted on a moving object.
- the feature point extraction unit is configured to extract a plurality of feature points as pre-movement points from at least one of the repeatedly acquired captured images, and to extract a plurality of feature points corresponding to the pre-movement points as post-movement points from a captured image acquired later in the time series than the captured image from which the pre-movement points were extracted.
- the basic estimation unit is configured to estimate a basic matrix, i.e., a matrix that transitions the pre-movement points to the post-movement points, under the assumption that the moving object moves along a preset trajectory without rotating.
- the posture estimation unit is configured to estimate the posture of the one or more imaging units according to the basic matrix.
- according to such a configuration, the moving direction of the imaging unit can be estimated simply by obtaining the basic matrix, so the rough posture of the imaging unit can be estimated with simple processing.
- the display system 1 is a system that is mounted on a vehicle such as a passenger car and that combines images captured by a plurality of cameras and displays the images on a display unit.
- the display system 1 also has a function of estimating the camera postures and correcting the displayed image according to the estimated postures.
- the display system 1 includes a control unit 10.
- the display system 1 may include a front camera 21F, a rear camera 21B, a right camera 21R, a left camera 21L, various sensors 22, a display unit 26, and the like.
- a vehicle equipped with the display system 1 is also referred to as a host vehicle.
- the front camera 21F, the rear camera 21B, the right camera 21R, and the left camera 21L are collectively referred to as a plurality of cameras 21.
- the front camera 21F and the rear camera 21B are attached to the front part and the rear part of the host vehicle, respectively, in order to image the road ahead of and behind the host vehicle.
- the right camera 21R and the left camera 21L are attached to the right side surface and the left side surface of the host vehicle, respectively, in order to image the roads to the right and left of the host vehicle. That is, the plurality of cameras 21 are arranged at different positions on the host vehicle.
- the plurality of cameras 21 are arranged so that their imaging regions partially overlap one another, and a portion of a captured image where the imaging regions overlap is referred to as an overlapping region.
- the various sensors 22 include, for example, a vehicle speed sensor, a yaw rate sensor, a steering angle sensor, and the like.
- the various sensors 22 are used to detect whether or not the host vehicle is traveling steadily, for example moving straight or turning at a constant rate.
- the control unit 10 includes a microcomputer having a CPU 11 and a semiconductor memory (hereinafter, memory 12) such as a RAM or a ROM. Each function of the control unit 10 is realized by the CPU 11 executing a program stored in a non-transitory tangible recording medium.
- in this example, the memory 12 corresponds to the non-transitory tangible recording medium that stores the program. By executing this program, a method corresponding to the program is performed. Note that the term non-transitory tangible recording medium excludes electromagnetic waves.
- the control unit 10 may include one microcomputer or a plurality of microcomputers.
- the control unit 10 includes a camera posture estimation unit 16 and an image drawing unit 17.
- the method of realizing the functions of the respective units included in the control unit 10 is not limited to software; some or all of the functions may be realized using one or more hardware components.
- when a function is realized by an electronic circuit, i.e., by hardware, the electronic circuit may be realized by a digital circuit, an analog circuit, or a combination thereof.
- the camera posture estimation unit 16 estimates the postures of the plurality of cameras 21.
- the function as the image drawing unit 17 generates a bird's-eye view image, i.e., an image of the road around the vehicle as seen looking straight down, from the images captured by the plurality of cameras 21. The generated bird's-eye view image is then displayed on the display unit 26, which is configured with a liquid crystal display or the like and is disposed in the vehicle interior.
- at this time, the coordinates serving as the boundaries between the captured images are set appropriately according to the postures of the plurality of cameras 21 so that the boundaries between the combined images fall at appropriate positions.
- the control unit 10 temporarily stores captured images obtained from the plurality of cameras 21 in the memory 12.
- posture estimation processing executed by the control unit 10 will be described with reference to the flowcharts of FIGS. 2A and 2B.
- the posture estimation process is started, for example, when the host vehicle begins steady running, and is performed repeatedly while steady running continues.
- in the present embodiment, the posture estimation process is performed when the host vehicle is moving straight, which is treated as the steady running.
- here, straight-ahead movement refers to a state in which the traveling speed of the host vehicle is equal to or higher than a preset threshold and the steering angle or the yaw rate is less than a preset threshold.
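- as a concrete illustration of this condition, the following minimal sketch checks steady straight-ahead running from the readings of the various sensors 22; the threshold values and units are illustrative assumptions, not values given in this disclosure.

```python
# Minimal sketch of the straight-ahead check described above.
# Threshold values are illustrative assumptions, not values from the disclosure.
MIN_SPEED_MPS = 5.0      # assumed speed threshold [m/s]
MAX_YAW_RATE = 0.01      # assumed yaw-rate threshold [rad/s]
MAX_STEER_ANGLE = 0.02   # assumed steering-angle threshold [rad]

def is_moving_straight(speed_mps: float, yaw_rate: float, steer_angle: float) -> bool:
    """True when the vehicle is fast enough and barely turning."""
    return (speed_mps >= MIN_SPEED_MPS
            and abs(yaw_rate) < MAX_YAW_RATE
            and abs(steer_angle) < MAX_STEER_ANGLE)
```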
- first, in S110, the control unit 10 acquires images captured by each of the plurality of cameras 21. At this time, a captured image captured in the past, for example 10 frames earlier, is also acquired from the memory 12.
- in S120, the control unit 10 extracts a plurality of feature points from each captured image and detects an optical flow for the plurality of feature points.
- the processes from S120 onward are performed for each of the plurality of cameras 21.
- here, a feature point represents an edge in the captured image at which the luminance or chromaticity changes, i.e., a boundary.
- among these edges, points indicating parts that are relatively easy to track in image processing, such as corners of structures, are adopted as feature points. It is sufficient that at least two feature points can be extracted.
- for example, the control unit 10 may extract many feature points such as corners of structures such as buildings and corners of window glass.
- in this way, the control unit 10 extracts a plurality of feature points as pre-movement points from each of the repeatedly acquired captured images, and extracts a plurality of feature points corresponding to the pre-movement points as post-movement points from a captured image acquired later in the time series than the captured image from which the pre-movement points were extracted.
- the control unit 10 then detects, as an optical flow, the line segment connecting each pre-movement point to its corresponding post-movement point, as in the sketch below.
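- a minimal sketch of this extraction and flow detection, using OpenCV's Shi-Tomasi corner detector and pyramidal Lucas-Kanade tracker as stand-ins for the extraction and tracking methods, which this disclosure does not specify:

```python
import cv2
import numpy as np

def detect_flows(img_prev: np.ndarray, img_curr: np.ndarray):
    """Extract pre-movement points in img_prev and track them into img_curr.

    Returns (pre_points, post_points) as (N, 2) float arrays. Shi-Tomasi
    corners and pyramidal Lucas-Kanade are stand-ins for the unspecified
    extraction/tracking methods of the disclosure.
    """
    gray_prev = cv2.cvtColor(img_prev, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(img_curr, cv2.COLOR_BGR2GRAY)

    # Corners (e.g., building edges, window-glass corners) are easy to track.
    pre = cv2.goodFeaturesToTrack(gray_prev, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    # Track each pre-movement point into the later frame.
    post, status, _err = cv2.calcOpticalFlowPyrLK(gray_prev, gray_curr, pre, None)

    ok = status.ravel() == 1  # keep only successfully tracked points
    return pre.reshape(-1, 2)[ok], post.reshape(-1, 2)[ok]
```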
- in S130, the control unit 10 estimates a basic matrix E on the premise of linear motion.
- the basic matrix E is a matrix for transitioning the plurality of pre-movement points U1 to the plurality of post-movement points U2, and represents the matrix obtained under the assumption that the moving object moves along a preset trajectory, in this embodiment linear motion, without rotating.
- the vehicle coordinate system is defined as shown in FIG. That is, the road surface at the center of the vehicle is the origin, the right direction of the vehicle is positive on the X axis, the rear direction of the vehicle is positive on the Y axis, and the downward direction of the vehicle is positive on the Z axis.
- the control unit 10 obtains a basic matrix E for each of the plurality of pre-movement points U1, and applies an optimization operation such as the least squares method to the plurality of basic matrices E to obtain the most probable basic matrix E.
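- under the no-rotation assumption, the basic matrix reduces to E = [V]x, the skew-symmetric matrix of the translation direction V, and the epipolar constraint u2^T E u1 = 0 becomes V . (u1 x u2) = 0. The following sketch realizes a least-squares estimate in that standard form; the exact computation used in this disclosure is not spelled out, so this is an assumed realization:

```python
import numpy as np

def estimate_basic_matrix(pre_pts: np.ndarray, post_pts: np.ndarray,
                          K: np.ndarray) -> np.ndarray:
    """Least-squares estimate of E = [V]x under the straight-motion,
    no-rotation assumption (one standard realization; an assumption here).

    pre_pts, post_pts: (N, 2) pixel coordinates; K: 3x3 camera intrinsics.
    """
    Kinv = np.linalg.inv(K)
    u1 = (Kinv @ np.c_[pre_pts, np.ones(len(pre_pts))].T).T    # normalized rays
    u2 = (Kinv @ np.c_[post_pts, np.ones(len(post_pts))].T).T

    # With E = [V]x, the constraint u2^T E u1 = 0 reads V . (u1 x u2) = 0,
    # so V spans the null space of the stacked cross products.
    A = np.cross(u1, u2)
    _, _, Vt = np.linalg.svd(A)
    V = Vt[-1]  # translation direction, up to sign and scale

    # Skew-symmetric matrix [V]x, i.e., the basic matrix E.
    return np.array([[0.0, -V[2], V[1]],
                     [V[2], 0.0, -V[0]],
                     [-V[1], V[0], 0.0]])
```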
- next, in S140, the control unit 10 estimates the camera posture. This process estimates the postures of the plurality of cameras 21 according to the basic matrix E. For example, the control unit 10 obtains the rotation component of each camera as follows.
- the control unit 10 first converts the translation vector V included in the basic matrix E into the world coordinate system using the rotation matrix R.
- next, a deviation angle dx about the X axis between the translation vector VW in the world coordinate system and the translation vector V in the vehicle coordinate system is calculated, and a rotation matrix Rx for correcting the deviation angle dx is obtained; a rotation matrix Rz about the Z axis is obtained in the same way.
- the rotation matrix R is then corrected by Rx and Rz to determine a rotation matrix R2 corresponding to the corrected camera angle.
- at this point, the rotational component around the Y axis remains uncorrected. If the basic matrix E is combined with such a rotation matrix R2 and the operation that transitions the pre-movement points U1 to the post-movement points U2 is then performed, the camera posture can be estimated with the rotational component around the Y axis taken into account.
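- one assumed realization of this correction is sketched below: the deviation angles are read off from the components of the translation direction relative to straight-ahead motion along the vehicle Y axis; the sign and composition conventions are illustrative assumptions.

```python
import numpy as np

def rot_x(a):  # rotation about the X axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):  # rotation about the Z axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def correct_rotation(R: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Align the measured translation direction V with straight-ahead motion
    along the vehicle Y axis, correcting R about the X and Z axes.
    Signs and axis conventions are illustrative assumptions."""
    Vw = R @ V                      # translation in world coordinates
    dx = np.arctan2(Vw[2], Vw[1])   # deviation angle about the X axis
    Rx = rot_x(-dx)
    Vw = Rx @ Vw                    # removes the Z component of Vw
    dz = np.arctan2(Vw[0], Vw[1])   # deviation angle about the Z axis
    Rz = rot_z(dz)
    return Rz @ Rx @ R              # R2: rotation after correction
```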
- note that the rotational component around the Y axis may be ignored, a preset rotation matrix Ry may be used, or Ry may be calculated as follows.
- when obtaining the rotation matrix Ry by calculation, the control unit 10 first extracts the road surface flow in S150. That is, the control unit 10 is configured to extract feature points on the road surface on which the moving object travels as road surface front points from at least one of the repeatedly acquired captured images. To do so, the control unit 10 recognizes the road surface region from the brightness of the pixels in the captured image.
- the control unit 10 is configured to extract feature points corresponding to the road surface front points as road surface rear points from a captured image acquired later in the time series than the captured image from which the road surface front points were extracted. The control unit 10 then detects, as a road surface flow (an optical flow), the line segment connecting each road surface front point to the corresponding road surface rear point.
- in S160, the control unit 10 estimates the camera angle and the camera height from the vehicle movement amount.
- that is, the basic matrix E, the road surface front points, and the road surface rear points are used to estimate the height of the one or more cameras 21 with respect to the road surface and their rotation angle with respect to the road surface.
- the control unit 10 performs the following process, for example. That is, using the rotation matrix R2, the homography matrix H that maps the road surface front points onto the road surface rear points is obtained; for a camera viewing the road plane, H can be written in the standard planar form H = R2 + V Nw^T / d, where V is the translation vector, Nw is the road surface normal vector, and d is the distance from the camera to the road surface.
- then, a rotation matrix Ry about the Y axis that optimally satisfies this relation is obtained using a known nonlinear optimization method such as Newton's method, as sketched below.
- the road surface normal vector Nw is obtained by identifying the plane representing the road surface from the plurality of road surface flows and taking the perpendicular to that plane.
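- one way to realize this plane estimation, sketched under assumptions: road surface points are triangulated from the road surface flows using the recovered motion (R2, V), and the plane normal is taken from a singular-value-decomposition fit. The disclosure does not name a specific method, so both steps are illustrative choices.

```python
import cv2
import numpy as np

def road_normal(road_pre, road_post, K, R2, V):
    """Fit the road plane from triangulated road-flow points and return its
    unit normal Nw. Triangulation + SVD plane fit is an assumed realization.

    road_pre, road_post: (N, 2) pixel coordinates of the road surface flow.
    """
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera pose
    P2 = K @ np.hstack([R2, V.reshape(3, 1)])           # second camera pose
    Xh = cv2.triangulatePoints(P1, P2, road_pre.T, road_post.T)
    X = (Xh[:3] / Xh[3]).T                              # (N, 3) road points

    centered = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered)
    n = Vt[-1]  # direction of least variance = plane normal
    return n / np.linalg.norm(n)
```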
- in other words, only the one remaining axis, the rotation matrix Ry, is obtained by this processing; the other two axes are obtained separately.
- the process of obtaining the first two axes, Rx and Rz, does not require the road surface flow and can use the structure flow. For this reason, this process can be made more robust than one that relies entirely on road surface feature points, which are less likely to produce contrast differences.
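- a sketch of the remaining one-axis search for Ry described above; the text names Newton's method, and a bounded scalar minimization stands in for it here, with the composition order R2 @ Ry and the availability of the camera height d treated as assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rot_y(a):  # rotation about the Y axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def estimate_ry(road_u1, road_u2, R2, V, Nw, d):
    """Find the Y-axis angle whose planar homography best maps the road
    surface front points onto the road surface rear points.

    road_u1, road_u2: (N, 3) normalized rays (z = 1) of the road flow points.
    Bounded search and composition order R2 @ Ry are assumptions.
    """
    def residual(theta):
        H = R2 @ rot_y(theta) + np.outer(V, Nw) / d   # H = R + V Nw^T / d
        mapped = (H @ road_u1.T).T
        mapped /= mapped[:, 2:3]                      # re-normalize to z = 1
        return np.sum((mapped[:, :2] - road_u2[:, :2]) ** 2)

    # +/- 0.2 rad search window is an illustrative assumption.
    res = minimize_scalar(residual, bounds=(-0.2, 0.2), method="bounded")
    return rot_y(res.x)
```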
- the control unit 10 associates feature points in the overlapping regions across the plurality of cameras 21. That is, the control unit 10 determines whether or not feature points located in an overlapping region represent the same object, and when they do, associates them with each other across the captured images. The camera postures obtained in this way are accumulated and recorded in the memory 12.
- the control unit 10 then estimates the relative positions between the cameras. That is, the control unit 10 estimates the inter-camera relative position by recognizing the shift between the coordinates of the feature points located in the overlapping region and the coordinates expected from the corrected camera postures.
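- a sketch of one assumed realization of this association and shift computation, in which feature points already projected onto the ground plane with each camera's current posture estimate are matched by nearest neighbor (the projection helper, the matching rule, and the radius are illustrative assumptions):

```python
import numpy as np

def associate_and_shift(pts_a_ground, pts_b_ground, match_radius=0.2):
    """Pair up ground-projected feature points from two cameras and return
    the mean offset between matched pairs. Nearest-neighbor matching and the
    0.2 m radius are illustrative assumptions.

    pts_*_ground: (N, 2) ground-plane coordinates computed from each
    camera's current posture estimate (projection helper not shown).
    """
    shifts = []
    for p in pts_a_ground:
        dists = np.linalg.norm(pts_b_ground - p, axis=1)
        j = np.argmin(dists)
        if dists[j] < match_radius:  # treated as the same object
            shifts.append(pts_b_ground[j] - p)
    # Mean residual offset ~ relative position error between the cameras.
    return np.mean(shifts, axis=0) if shifts else np.zeros(2)
```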
- the control unit 10 acquires the camera postures accumulated in the memory 12.
- the control unit 10 then integrates the accumulated camera postures.
- the control unit 10 is configured to perform filtering on a plurality of posture estimation results and output the filtered values as the postures of the plurality of cameras 21.
- any filtering method, such as a simple average, a weighted average, or the least squares method, can be adopted.
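- for instance, a simple or weighted average over the accumulated estimates might look like the following sketch; representing a posture as a flat vector of angles and height is an illustrative assumption:

```python
import numpy as np

def filter_postures(postures: np.ndarray, weights=None) -> np.ndarray:
    """Filter accumulated posture estimates by (weighted) averaging.

    postures: (N, 4) rows of [roll, pitch, yaw, height]; this flat
    parameterization is an illustrative assumption.
    """
    if weights is None:
        return postures.mean(axis=0)  # simple average
    w = np.asarray(weights, dtype=float)
    return (postures * w[:, None]).sum(axis=0) / w.sum()  # weighted average
```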
- image conversion is processing for generating the bird's-eye view image with the camera postures taken into account. Since the boundaries for combining the captured images are set with the camera postures taken into account, situations in which one object in the imaging region is displayed twice in the bird's-eye view image, or is not displayed at all, can be suppressed.
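- the warp step of this conversion can be sketched as follows, assuming a ground-to-image homography derived from the estimated camera posture is available (its derivation is not shown here):

```python
import cv2
import numpy as np

def to_birds_eye(img: np.ndarray, H_ground_to_img: np.ndarray,
                 out_size=(800, 800)) -> np.ndarray:
    """Warp one camera image onto the ground plane (bird's-eye view).

    H_ground_to_img maps bird's-eye pixel coordinates to captured-image
    coordinates and is assumed to come from the estimated camera posture.
    """
    # With WARP_INVERSE_MAP, warpPerspective samples the source image at
    # H_ground_to_img @ (destination pixel), which is exactly what we want.
    return cv2.warpPerspective(img, H_ground_to_img, out_size,
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```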
- the control unit 10 displays the bird's-eye view image as a video on the display unit 26, and then ends the posture estimation process of FIGS. 2A and 2B.
- as described above, the control unit 10 is configured to repeatedly acquire, in S110, each captured image obtained by the one or more cameras 21 mounted on the moving object.
- in S120, the control unit 10 extracts a plurality of feature points as pre-movement points from at least one of the repeatedly acquired captured images, and extracts a plurality of feature points corresponding to the pre-movement points as post-movement points from a captured image acquired later in the time series than the captured image from which the pre-movement points were extracted.
- in S130, the control unit 10 is configured to estimate the basic matrix, i.e., the matrix that transitions the pre-movement points to the post-movement points, under the assumption that the moving object moves along a preset trajectory without rotating.
- in S140, the control unit 10 is configured to estimate the postures of the one or more cameras 21 according to the basic matrix.
- according to such a configuration, the rough posture of the camera 21 can be estimated by simple processing. Further, even when feature points on the road surface cannot be accurately extracted, for example because the road surface has little contrast, feature points can still be extracted from structures and the like, so the accuracy of the camera posture estimation can be improved.
- in S150, the control unit 10 is configured to extract, from at least one of the repeatedly acquired captured images, feature points on the road surface on which the moving object travels as road surface front points, and to extract feature points corresponding to the road surface front points as road surface rear points from a captured image acquired later in the time series than the captured image from which the road surface front points were extracted.
- in S160, the control unit 10 is configured to estimate the height of the one or more cameras 21 with respect to the road surface and the rotation angle with respect to the road surface, using the basic matrix, the road surface front points, and the road surface rear points.
- with such a configuration, the height with respect to the road surface and the rotation angle with respect to the road surface are estimated using feature points on the road surface, so the estimation accuracy can be improved.
- the control unit 10 is configured to repeat at least the processes of S120 to S140 three or more times while changing the captured images from which the feature points are extracted, thereby repeatedly estimating the postures of the one or more cameras 21.
- the control unit 10 is then configured to filter the plurality of posture estimation results and output the filtered values as the postures of the one or more cameras 21.
- in the above embodiment, the configuration includes the plurality of cameras 21, but the configuration may include only one camera.
- in the above embodiment, the postures of the plurality of cameras 21 are individually estimated using the captured images obtained from the plurality of cameras 21, but the present disclosure is not limited to this.
- for example, the height of a camera whose posture is to be estimated may be estimated using the height of another camera or the like. This is possible because the positional relationship between the camera whose posture is to be estimated and the other cameras is often set in advance.
- that is, in S160, the control unit 10 may be configured to estimate the height of a camera 21 with respect to the road surface using the height from the road surface obtained for a camera 21 other than that camera. In particular, in S160, the control unit 10 may be configured to estimate the height of the one or more cameras 21 with respect to the road surface using the height from the road surface obtained for the camera 21 whose captured image contains the largest proportion of road surface, among the cameras 21 other than the camera 21 whose height is to be estimated.
- with such a configuration, the height of another camera 21 from the road surface can be reused, so the process of estimating the height can be simplified.
- moreover, since the height from the road surface is obtained for the camera 21 whose captured image contains the largest proportion of road surface among the other cameras 21, a height that is highly likely to have been obtained accurately can be used, and the height from the road surface can therefore be estimated accurately.
- a plurality of functions of one constituent element in the embodiment may be realized by a plurality of constituent elements, or a single function of one constituent element may be realized by a plurality of constituent elements. Further, a plurality of functions possessed by a plurality of constituent elements may be realized by one constituent element, or one function realized by a plurality of constituent elements may be realized by one constituent element. Moreover, a part of the configuration of the above embodiment may be omitted.
- the present disclosure can also be realized in various forms other than the above-described posture estimation device, such as a device that is a component of the display system 1, a program for causing a computer to function as the display system 1, a non-transitory tangible recording medium such as a semiconductor memory on which the program is recorded, and a posture estimation method.
- the control unit 10 corresponds to the posture estimation device referred to in the present disclosure.
- among the processes executed by the control unit 10, the process of S110 corresponds to the image acquisition unit referred to in the present disclosure, and the process of S120 corresponds to the feature point extraction unit referred to in the present disclosure.
- the process of S130 corresponds to the basic estimation unit referred to in the present disclosure, and the process of S140 corresponds to the posture estimation unit and the first estimation unit referred to in the present disclosure.
- the process of S150 corresponds to the travel point extraction unit referred to in the present disclosure, the process of S160 corresponds to the second estimation unit referred to in the present disclosure, and the process of S230 corresponds to the filtering unit referred to in the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Mechanical Engineering (AREA)
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
Abstract
According to one aspect, the invention relates to a posture estimation device provided with an image acquisition unit (S110), a feature point extraction unit (S120), a basic estimation unit (S130), and a posture estimation unit (S140). The feature point extraction unit is configured to extract a plurality of feature points as a plurality of pre-movement points from at least one repeatedly acquired captured image, and to extract a plurality of feature points corresponding to the plurality of pre-movement points, as a plurality of post-movement points, from within a captured image acquired chronologically later than the captured image from which the plurality of pre-movement points were extracted. The basic estimation unit is configured to estimate a basic matrix, which is a matrix for causing the plurality of pre-movement points to transition to the plurality of post-movement points, and which represents the matrix obtained when a moving object is assumed to move along a preset trajectory without rotating. The posture estimation unit is configured to estimate the posture of one or more imaging units in accordance with the basic matrix.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2018-019011 | 2018-02-06 | ||
| JP2018019011A JP6947066B2 (ja) | 2018-02-06 | 2018-02-06 | Posture estimation device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019156072A1 true WO2019156072A1 (fr) | 2019-08-15 |
Family
ID=67549662
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2019/004065 Ceased WO2019156072A1 (fr) | 2019-02-05 | Posture estimation device |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JP6947066B2 (fr) |
| WO (1) | WO2019156072A1 (fr) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP7237773B2 (ja) * | 2019-08-23 | 2023-03-13 | DENSO TEN Limited | Posture estimation device, abnormality detection device, correction device, and posture estimation method |
| JP7297595B2 (ja) * | 2019-08-23 | 2023-06-26 | DENSO TEN Limited | Posture estimation device, abnormality detection device, correction device, and posture estimation method |
| JP7256734B2 (ja) * | 2019-12-12 | 2023-04-12 | DENSO TEN Limited | Posture estimation device, abnormality detection device, correction device, and posture estimation method |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4868964B2 (ja) * | 2006-07-13 | 2012-02-01 | Mitsubishi Fuso Truck and Bus Corporation | Traveling state determination device |
- 2018
- 2018-02-06: JP application JP2018019011A granted as patent JP6947066B2 (ja), status: Active
- 2019
- 2019-02-05: WO application PCT/JP2019/004065 published as WO2019156072A1 (fr), status: Ceased
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPS623663A (ja) * | 1985-06-28 | 1987-01-09 | Agency Of Ind Science & Technol | Self-movement state extraction method |
| JP2014101075A (ja) * | 2012-11-21 | 2014-06-05 | Fujitsu Ltd | Image processing device, image processing method, and program |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114782447A (zh) * | 2022-06-22 | 2022-07-22 | Xiaomi Automobile Technology Co., Ltd. | Road surface detection method and apparatus, vehicle, storage medium, and chip |
| CN114782447B (zh) * | 2022-06-22 | 2022-09-09 | Xiaomi Automobile Technology Co., Ltd. | Road surface detection method and apparatus, vehicle, storage medium, and chip |
Also Published As
| Publication number | Publication date |
|---|---|
| JP6947066B2 (ja) | 2021-10-13 |
| JP2019139290A (ja) | 2019-08-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP6612297B2 (ja) | Vertical contour detection of roads | |
| JP4882571B2 (ja) | Vehicle monitoring device | |
| JP6184447B2 (ja) | Estimation device and estimation program | |
| US20120327189A1 (en) | Stereo Camera Apparatus | |
| WO2019156072A1 (fr) | Posture estimation device | |
| JP6316976B2 (ja) | In-vehicle image recognition device | |
| CN102549631A (zh) | Vehicle surroundings monitoring device | |
| JP6377970B2 (ja) | Parallax image generation device and parallax image generation method | |
| US9902341B2 (en) | Image processing apparatus and image processing method including area setting and perspective conversion | |
| JP2020060550A (ja) | Abnormality detection device, abnormality detection method, posture estimation device, and moving body control system | |
| JP2010226652A (ja) | Image processing device, image processing method, and computer program | |
| JP2012252501A (ja) | Traveling road recognition device and traveling road recognition program | |
| JP6032141B2 (ja) | Road surface marking detection device and road surface marking detection method | |
| JP2009059132A (ja) | Object recognition device | |
| CN111316322B (zh) | Road surface area detection device | |
| JP2018136739A (ja) | Calibration device | |
| JP7311407B2 (ja) | Posture estimation device and posture estimation method | |
| CN111260538B (zh) | Positioning based on a long-baseline binocular fisheye camera, and in-vehicle terminal | |
| JP7247772B2 (ja) | Information processing device and driving support system | |
| CN113170057A (zh) | Imaging unit control device | |
| JP2020087210A (ja) | Calibration device and calibration method | |
| JP2015001812A (ja) | Image processing device, image processing method, and image processing program | |
| JP7169227B2 (ja) | Abnormality detection device and abnormality detection method | |
| KR101949349B1 (ko) | Around view monitoring device | |
| JP2025035898A (ja) | Depression angle correction device, depression angle correction method, and depression angle correction program | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19750762 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 19750762 Country of ref document: EP Kind code of ref document: A1 |