
CN110612431B - Cross-field of view for autonomous vehicle systems - Google Patents


Info

Publication number
CN110612431B
Authority
CN
China
Prior art keywords
camera
view
vehicle
optical axis
field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201880030715.8A
Other languages
Chinese (zh)
Other versions
CN110612431A (en)
Inventor
G.斯坦
O.埃唐
E.贝尔曼
M.卡蒂埃
Current Assignee
Mobileye Vision Technologies Ltd
Original Assignee
Mobileye Vision Technologies Ltd
Priority date
Filing date
Publication date
Application filed by Mobileye Vision Technologies Ltd
Publication of CN110612431A
Application granted
Publication of CN110612431B


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/54Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60SSERVICING, CLEANING, REPAIRING, SUPPORTING, LIFTING, OR MANOEUVRING OF VEHICLES, NOT OTHERWISE PROVIDED FOR
    • B60S1/00Cleaning of vehicles
    • B60S1/02Cleaning windscreens, windows or optical devices
    • B60S1/04Wipers or the like, e.g. scrapers
    • B60S1/06Wipers or the like, e.g. scrapers characterised by the drive
    • B60S1/08Wipers or the like, e.g. scrapers characterised by the drive electrically driven
    • B60S1/0818Wipers or the like, e.g. scrapers characterised by the drive electrically driven including control systems responsive to external conditions, e.g. by detection of moisture, dirt or the like
    • B60S1/0822Wipers or the like, e.g. scrapers characterised by the drive electrically driven including control systems responsive to external conditions, e.g. by detection of moisture, dirt or the like characterized by the arrangement or type of detection means
    • B60S1/0833Optical rain sensor
    • B60S1/0844Optical rain sensor including a camera
    • B60S1/0848Cleaning devices for cameras on vehicle
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/14Adaptive cruise control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • G01C11/06Interpretation of pictures by comparison of two or more pictures of the same area
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/51Housings
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00Arrangements for holding or mounting articles, not otherwise provided for
    • B60R2011/0001Arrangements for holding or mounting articles, not otherwise provided for characterised by position
    • B60R2011/0003Arrangements for holding or mounting articles, not otherwise provided for characterised by position inside the vehicle
    • B60R2011/0026Windows, e.g. windscreen
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00Arrangements for holding or mounting articles, not otherwise provided for
    • B60R2011/0042Arrangements for holding or mounting articles, not otherwise provided for characterised by mounting means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
    • B60R2300/105Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using multiple cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Transportation (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)
  • Studio Devices (AREA)
  • Measurement Of Optical Distance (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

An imaging system for a vehicle is provided. In an embodiment, the imaging system includes an imaging module, a first camera coupled to the imaging module, a second camera coupled to the imaging module, and a mounting assembly configured to attach the imaging module to a vehicle such that the first and second cameras face outward relative to the vehicle. The first camera has a first field of view and a first optical axis, and the second camera has a second field of view and a second optical axis. The first optical axis intersects the second optical axis at at least one intersection point in an intersection plane. The first camera is focused at a first horizontal distance beyond the intersection point, and the second camera is focused at a second horizontal distance beyond the intersection point.

Description

Cross Field of View for Autonomous Vehicle Systems

Cross-Reference to Related Applications

This application claims priority to U.S. Provisional Patent Application No. 62/504504, filed May 10, 2017, which is incorporated herein by reference in its entirety.

Technical Field

The present disclosure relates generally to camera systems for autonomous vehicles. In another aspect, the present disclosure relates to camera systems with crossed fields of view.

Background

An autonomous vehicle may need to take many factors into account and make appropriate decisions based on those factors in order to reach an intended destination safely and accurately. For example, to navigate to a destination, an autonomous vehicle may need to identify its position within a road (e.g., a particular lane of a multi-lane road), navigate alongside other vehicles, avoid obstacles and pedestrians, observe traffic signals and signs, and travel from one road to another at appropriate intersections or junctions. Utilizing and interpreting the vast amount of information an autonomous vehicle collects as it travels toward its destination poses many design challenges. The sheer quantity of data an autonomous vehicle may need to analyze, access, and/or store (e.g., captured image data, map data, GPS data, sensor data, etc.) poses challenges that can limit or even adversely affect autonomous navigation. For example, as part of the collected data, an autonomous vehicle may need to process and interpret visual information (e.g., information captured by multiple cameras located at discrete positions on the vehicle). Each camera has a particular field of view. Where multiple cameras are used together, their fields of view may in some cases overlap and/or be redundant.

Summary

Embodiments consistent with the present disclosure provide systems and methods for autonomous vehicle navigation. The disclosed embodiments may use cameras to provide autonomous vehicle navigation features. For example, consistent with the disclosed embodiments, the disclosed systems may include one, two, three, or more cameras that monitor the environment of a vehicle. The field of view of each camera may overlap that of another camera, or even of multiple cameras. The disclosed systems may provide a navigational response based on, for example, an analysis of images captured by one or more of the cameras. The navigational response may also take into account other data, including, for example, global positioning system (GPS) data, sensor data (e.g., from accelerometers, speed sensors, suspension sensors, etc.), and/or other map data.

In one embodiment, an imaging system for a vehicle is provided. The imaging system may include an imaging module and a first camera coupled to the imaging module. The first camera may have a first field of view and a first optical axis. The imaging system may also include a second camera coupled to the imaging module, having a second field of view and a second optical axis. The imaging system may further include a mounting assembly configured to attach the imaging module to the vehicle such that the first camera and the second camera face outward relative to the vehicle. The first optical axis may intersect the second optical axis at at least one intersection point in an intersection plane. The first camera may be focused at a first horizontal distance beyond the intersection point, and the second camera may be focused at a second horizontal distance beyond the intersection point. The first field of view and the second field of view may form a combined field of view.
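The crossing-axes geometry described above can be sketched numerically: model each optical axis as a 2-D ray leaving a camera position along a heading, and solve for the point where the two rays meet. This is only an illustrative sketch; the camera positions, the 10-degree toe-in, and the parallel-axis tolerance are assumed example values, not parameters from the disclosure.

```python
import math

def axis_intersection(p1, heading1_deg, p2, heading2_deg):
    """Intersect two 2-D rays (camera position, optical-axis heading).

    Headings are in degrees, measured counter-clockwise from the +x axis
    (taken here as the vehicle's forward direction). Returns the point
    where the two axes cross, or None if they are parallel.
    """
    d1 = (math.cos(math.radians(heading1_deg)), math.sin(math.radians(heading1_deg)))
    d2 = (math.cos(math.radians(heading2_deg)), math.sin(math.radians(heading2_deg)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]  # 2-D cross product of the directions
    if abs(denom) < 1e-12:
        return None  # parallel axes never cross
    # Solve p1 + t*d1 == p2 + s*d2 for t (Cramer's rule).
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two cameras mounted 0.2 m apart, each toed in by 10 degrees:
# their axes cross on the vehicle centerline, roughly 0.57 m ahead.
left = axis_intersection((0.0, 0.1), -10.0, (0.0, -0.1), 10.0)
```

In the embodiment above, each camera would then be focused at some horizontal distance beyond this crossing point.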

In one embodiment, an imaging system for a vehicle is provided. The imaging system may include an imaging module and a first camera coupled to the imaging module. The first camera may have a first field of view and a first optical axis. The imaging system may also include a second camera coupled to the imaging module, having a second field of view and a second optical axis, and a third camera coupled to the imaging module, having a third field of view and a third optical axis. The imaging system may further include a mounting assembly configured to attach the imaging module to an interior window of the vehicle such that the first, second, and third cameras face outward relative to the vehicle. The first optical axis may intersect the second optical axis at at least one first intersection point in a first intersection plane, the first optical axis may intersect the third optical axis at at least one second intersection point in a second intersection plane, and the second optical axis may intersect the third optical axis at at least one third intersection point in a third intersection plane. The first, second, and third fields of view may form a combined field of view.

In one embodiment, an imaging system for a vehicle is provided. The imaging system may include an imaging module configured to arrange a plurality of cameras along a semicircular arc, with the cameras oriented toward the radius of the semicircle. The imaging system may also include a mounting assembly configured to attach the imaging module to an interior window of the vehicle such that the plurality of cameras face outward relative to the vehicle. Each respective camera of the plurality of cameras may have a respective field of view and a respective optical axis projecting outward through a single, relatively small, transparent opening. Each respective field of view at least partially overlaps the radius of the semicircle, and the radius of the semicircle is centered within the single, relatively small, transparent opening. Each respective optical axis intersects every other respective optical axis at at least one respective intersection point in a respective intersection plane, and all of the respective fields of view form a combined field of view.
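As a rough illustration of the semicircular arrangement, the snippet below places N cameras evenly along a semicircular arc and aims each one inward through the arc's center point (one reading of "oriented toward the radius of the semicircle"), so that every pair of optical axes crosses at that single point. The camera count and the 0.15 m radius are invented example values, not parameters from the disclosure.

```python
import math

def semicircle_rig(n_cameras, radius, span_deg=180.0):
    """Place cameras evenly along a semicircular arc of the given radius,
    each aimed inward through the arc's center at the origin.

    Returns a list of ((x, y), heading_deg) pairs, where (x, y) is a point
    on the arc and heading_deg points from that camera toward the center.
    Because every heading passes through one point, every pair of optical
    axes crosses there.
    """
    rigs = []
    for i in range(n_cameras):
        # Spread the cameras over the arc from 0 to span_deg.
        theta = math.radians(i * span_deg / max(n_cameras - 1, 1))
        x, y = radius * math.cos(theta), radius * math.sin(theta)
        heading = math.degrees(math.atan2(-y, -x))  # aim at the center (0, 0)
        rigs.append(((x, y), heading))
    return rigs

rig = semicircle_rig(5, 0.15)  # five cameras on a 15 cm semicircular arc
```

The center of the arc corresponds to the single small transparent opening through which all of the optical axes project.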

The foregoing general description and the following detailed description are exemplary and explanatory only, and are not restrictive of the claims.

Brief Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate various disclosed embodiments. In the drawings:

FIG. 1 is a schematic representation of an exemplary system consistent with the disclosed embodiments.

FIG. 2A is a schematic side-view representation of an exemplary vehicle including a system consistent with the disclosed embodiments.

FIG. 2B is a schematic top-view representation of the vehicle and system shown in FIG. 2A, consistent with the disclosed embodiments.

FIG. 2C is a schematic top-view representation of another embodiment of a vehicle including a system consistent with the disclosed embodiments.

FIG. 2D is a schematic top-view representation of yet another embodiment of a vehicle including a system consistent with the disclosed embodiments.

FIG. 2E is a schematic top-view representation of yet another embodiment of a vehicle including a system consistent with the disclosed embodiments.

FIG. 2F is a schematic representation of an exemplary vehicle control system consistent with the disclosed embodiments.

FIG. 3A is a schematic representation of the interior of a vehicle, including a rearview mirror and a user interface for a vehicle imaging system, consistent with the disclosed embodiments.

FIG. 3B is an illustration of an example of a camera mount configured to be positioned behind a rearview mirror and against a vehicle windshield, consistent with the disclosed embodiments.

FIG. 3C is an illustration of the camera mount shown in FIG. 3B viewed from a different angle, consistent with the disclosed embodiments.

FIG. 3D is an illustration of another example of a camera mount configured to be positioned behind a rearview mirror and against a vehicle windshield, consistent with the disclosed embodiments.

FIG. 4 is an exemplary block diagram of a memory configured to store instructions for performing one or more operations, consistent with the disclosed embodiments.

FIG. 5A is a flowchart illustrating an exemplary process for eliciting one or more navigational responses based on monocular image analysis, consistent with the disclosed embodiments.

FIG. 5B is a flowchart illustrating an exemplary process for detecting one or more vehicles and/or pedestrians in a set of images, consistent with the disclosed embodiments.

FIG. 5C is a flowchart illustrating an exemplary process for detecting road markings and/or lane geometry information in a set of images, consistent with the disclosed embodiments.

FIG. 5D is a flowchart illustrating an exemplary process for detecting traffic lights in a set of images, consistent with the disclosed embodiments.

FIG. 5E is a flowchart illustrating an exemplary process for eliciting one or more navigational responses based on a vehicle path, consistent with the disclosed embodiments.

FIG. 5F is a flowchart illustrating an exemplary process for determining whether a leading vehicle is changing lanes, consistent with the disclosed embodiments.

FIG. 6 is a flowchart illustrating an exemplary process for eliciting one or more navigational responses based on stereo image analysis, consistent with the disclosed embodiments.

FIG. 7 is a flowchart illustrating an exemplary process for eliciting one or more navigational responses based on an analysis of three sets of images, consistent with the disclosed embodiments.

FIG. 8 is a schematic representation of an embodiment of an imaging system consistent with the disclosed embodiments.

FIG. 9 is a schematic diagram of a single-camera wide-field-of-view imaging system.

FIG. 10 is a schematic representation of another embodiment of an imaging system consistent with the disclosed embodiments.

FIG. 11A is a schematic plan-view representation of an exemplary imaging system having a combined field of view, consistent with the disclosed embodiments.

FIG. 11B is a schematic plan-view representation of an exemplary imaging system having a combined field of view, consistent with the disclosed embodiments.

FIG. 12A is a schematic plan-view representation of another exemplary imaging system having a combined field of view, consistent with the disclosed embodiments.

FIG. 12B is a schematic plan-view representation of another exemplary imaging system having a combined field of view, consistent with the disclosed embodiments.

FIG. 13 is a perspective-view representation of another exemplary imaging system having a combined field of view, consistent with the disclosed embodiments.

FIG. 14 is a perspective-view representation of another exemplary imaging system having a combined field of view, consistent with the disclosed embodiments.

FIG. 15 is a perspective-view representation of another exemplary imaging system having a combined field of view, consistent with the disclosed embodiments.

FIG. 16 is a perspective-view representation of the imaging system of FIG. 15, consistent with the disclosed embodiments.

FIG. 17 is a perspective-view representation of another exemplary imaging system having a combined field of view, consistent with the disclosed embodiments.

FIG. 18 is a perspective-view representation of the imaging system of FIG. 17, consistent with the disclosed embodiments.

FIG. 19 is a schematic plan-view representation of another exemplary imaging system having a combined field of view, consistent with the disclosed embodiments.

FIG. 20 is a schematic plan-view representation of another exemplary imaging system having a combined field of view, consistent with the disclosed embodiments.

FIG. 21 is a front-view representation of another exemplary imaging system consistent with the disclosed embodiments.

FIG. 22 is a perspective view of an exemplary vehicle including an imaging system consistent with the disclosed embodiments.

FIG. 23 is a side view of an exemplary vehicle including an imaging system consistent with the disclosed embodiments.

FIG. 24 is a schematic plan-view representation of FIG. 23, consistent with the disclosed embodiments.

FIG. 25 is a schematic plan-view representation of an imaging system consistent with the disclosed embodiments.

Detailed Description

The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and in the following description to refer to the same or similar parts. While several illustrative embodiments are described herein, modifications, adaptations, and other implementations are possible. For example, substitutions, additions, or modifications may be made to the components illustrated in the drawings, and the illustrative methods described herein may be modified by substituting, reordering, removing, or adding steps to the disclosed methods. Accordingly, the following detailed description is not limited to the disclosed embodiments and examples. Instead, the proper scope is defined by the appended claims.

Autonomous Vehicle Overview

As used throughout this disclosure, the term "autonomous vehicle" refers to a vehicle capable of implementing at least one navigational change without driver input. A "navigational change" refers to a change in one or more of the vehicle's steering, braking, or acceleration. To be autonomous, a vehicle need not be fully automatic (e.g., fully operational without a driver or driver input). Rather, autonomous vehicles include those that can operate under driver control during certain periods of time and without driver control during other periods. Autonomous vehicles may also include vehicles that control only certain aspects of vehicle navigation, such as steering (e.g., to maintain the vehicle's course between lane constraints), while leaving other aspects (e.g., braking) to the driver. In some cases, an autonomous vehicle may handle some or all aspects of the vehicle's braking, speed control, and/or steering.

Because human drivers typically rely on visual cues and observations to control a vehicle, transportation infrastructures have been built accordingly: lane markings, traffic signs, and traffic lights are all designed to provide visual information to drivers. In view of these design characteristics of transportation infrastructures, an autonomous vehicle may include cameras and a processing unit that analyzes visual information captured from the vehicle's environment. The visual information may include, for example, components of the transportation infrastructure observable by drivers (e.g., lane markings, traffic signs, traffic lights, etc.) and other obstacles (e.g., other vehicles, pedestrians, debris, etc.). In addition, an autonomous vehicle may also use stored information, such as information that provides a model of the vehicle's environment, when navigating. For example, the vehicle may use GPS data, sensor data (e.g., from accelerometers, speed sensors, suspension sensors, etc.), and/or other map data to provide information related to its environment while traveling, and the vehicle (as well as other vehicles) may use this information to localize itself on the model.

In some embodiments of the present disclosure, an autonomous vehicle may use information obtained while navigating (e.g., from cameras, GPS devices, accelerometers, speed sensors, suspension sensors, etc.). In other embodiments, an autonomous vehicle may, while navigating, use information obtained from past navigations of the vehicle (or of other vehicles). In still other embodiments, an autonomous vehicle may use a combination of information obtained while navigating and information obtained from past navigations. The following sections provide an overview of a system consistent with the disclosed embodiments, followed by an overview of a forward-facing imaging system and methods consistent with that system. Subsequent sections disclose systems and methods for constructing, using, and updating sparse maps for autonomous vehicle navigation.

As used throughout this disclosure, the term "field of view" refers to the total region a camera is able to view in three dimensions. When this disclosure describes a field of view with reference to a single angle, that single angle refers to a two-dimensional horizontal field of view. As used throughout this disclosure, the term "optical axis" refers to the central axis of a camera's field of view. In other words, the "optical axis" is the vector at the center of projection of the region visible to the camera; equivalently, it is the axis about which the camera's field of view is symmetrically oriented.
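Since a combined field of view is the union of the individual horizontal fields of view, it can be illustrated as an interval-merging computation over angles: each camera covers the angular interval centered on its optical-axis heading and spanning its horizontal field of view. The sketch below is a simplification that ignores wrap-around past 360 degrees, and the specific camera angles in the example are invented, not values from the disclosure.

```python
def combined_fov(cameras):
    """Total angular coverage, in degrees, of a set of cameras.

    Each camera is given as (optical-axis heading in degrees,
    full horizontal field of view in degrees); its coverage is the
    interval [heading - fov/2, heading + fov/2]. Overlapping intervals
    are merged so shared coverage is counted once. No wrap-around
    handling past a full turn.
    """
    intervals = sorted((h - f / 2.0, h + f / 2.0) for h, f in cameras)
    total, cur_lo, cur_hi = 0.0, *intervals[0]
    for lo, hi in intervals[1:]:
        if lo <= cur_hi:            # overlapping views merge
            cur_hi = max(cur_hi, hi)
        else:                       # disjoint: close off the current span
            total += cur_hi - cur_lo
            cur_lo, cur_hi = lo, hi
    total += cur_hi - cur_lo
    return total

# Two 52-degree cameras with optical axes crossed 20 degrees apart:
# the individual views overlap, so the union is 52 + 20 = 72 degrees.
wide = combined_fov([(-10.0, 52.0), (10.0, 52.0)])
```

This illustrates why crossing the optical axes widens the combined field of view while each individual camera keeps a narrower, higher-resolution view.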

System Overview

FIG. 1 is a block diagram representation of a system 100 consistent with exemplary disclosed embodiments. The system 100 may include various components depending on the requirements of a particular implementation. In some embodiments, the system 100 may include a processing unit 110, an image acquisition unit 120, a position sensor 130, one or more memory units 140, 150, a map database 160, a user interface 170, and a wireless transceiver 172. The processing unit 110 may include one or more processing devices. In some embodiments, the processing unit 110 may include an application processor 180, an image processor 190, or any other suitable processing device. Similarly, the image acquisition unit 120 may include any number of image acquisition devices and components, depending on the requirements of a particular application. In some embodiments, the image acquisition unit 120 may include one or more image capture devices (e.g., cameras), such as an image capture device 122, an image capture device 124, and an image capture device 126. The system 100 may also include a data interface 128 that communicatively connects the processing unit 110 to the image acquisition unit 120. For example, the data interface 128 may include any wired and/or wireless link or links for transmitting image data acquired by the image acquisition unit 120 to the processing unit 110.
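For readers who think in code, the relationship between the image acquisition unit 120 and its capture devices 122, 124, and 126 can be sketched as a simple data model. This is purely an illustrative reading of the description, not an API from the disclosure; the field names and the 52-degree/±10-degree values are made-up examples.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ImageCaptureDevice:
    """One camera, e.g. image capture device 122, 124, or 126."""
    device_id: int
    fov_deg: float            # horizontal field of view (example value)
    axis_heading_deg: float   # optical-axis direction relative to vehicle forward

@dataclass
class ImageAcquisitionUnit:
    """Image acquisition unit 120: holds any number of capture devices."""
    devices: List[ImageCaptureDevice] = field(default_factory=list)

# A three-camera configuration with the outer axes toed toward the center.
rig = ImageAcquisitionUnit([
    ImageCaptureDevice(122, 52.0, -10.0),
    ImageCaptureDevice(124, 52.0, 0.0),
    ImageCaptureDevice(126, 52.0, 10.0),
])
```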

无线收发器172可以包括一个或多个设备，该设备配置成通过使用射频、红外频率、磁场或电场，经空中接口与一个或多个网络（例如蜂窝网络、互联网等）交换传输。无线收发器172可以使用任何已知的标准来传输和/或接收数据（例如WiFi、Bluetooth Smart、802.15.4、ZigBee等）。这种传输可以包括从主车辆到一个或多个远程服务器的通信。这种传输还可以包括主车辆与主车辆环境中的一个或多个目标车辆之间的（单向或双向）通信（例如为了根据主车辆环境中的目标车辆、或与这些目标车辆一起协调主车辆的导航），甚至包括向发送车辆附近的未指定接收者的广播传输。Wireless transceiver 172 may include one or more devices configured to exchange transmissions over an air interface with one or more networks (e.g., cellular, the Internet, etc.) through the use of radio frequency, infrared frequency, magnetic fields, or electric fields. Wireless transceiver 172 may transmit and/or receive data using any known standard (e.g., WiFi, Bluetooth Smart, 802.15.4, ZigBee, etc.). Such transmissions may include communications from the host vehicle to one or more remote servers. Such transmissions may also include communications (one-way or two-way) between the host vehicle and one or more target vehicles in the environment of the host vehicle (e.g., to facilitate coordination of the host vehicle's navigation in view of, or together with, target vehicles in the host vehicle's environment), and even broadcast transmissions to unspecified recipients in the vicinity of the transmitting vehicle.

应用处理器180和图像处理器190都可以包括各种类型的处理设备。例如，应用处理器180和图像处理器190中的一个或两个可以包括微处理器、预处理器（比如图像预处理器）、图形处理单元（GPU）、中央处理单元（CPU）、支持电路、数字信号处理器、集成电路、存储器或适合运行应用程序以及进行图像处理和分析的任何其他类型的设备。在一些实施例中，应用处理器180和/或图像处理器190可以包括任何类型的单核或多核处理器、移动设备微控制器、中央处理单元等。可以使用各种处理设备，例如包括可从各制造商获得的处理器或GPU，并且可以包括各种架构（例如x86处理器等）。Both applications processor 180 and image processor 190 may include various types of processing devices. For example, one or both of applications processor 180 and image processor 190 may include a microprocessor, a preprocessor (such as an image preprocessor), a graphics processing unit (GPU), a central processing unit (CPU), support circuits, digital signal processors, integrated circuits, memory, or any other type of device suitable for running applications and for image processing and analysis. In some embodiments, applications processor 180 and/or image processor 190 may include any type of single- or multi-core processor, mobile device microcontroller, central processing unit, etc. Various processing devices may be used, including, for example, processors or GPUs available from various manufacturers, and may include various architectures (e.g., x86 processors, etc.).

在一些实施例中，应用处理器180和/或图像处理器190可以包括可从Mobileye®获得的任何EyeQ系列处理器芯片。这些处理器设计各自包括具有本地存储器和指令集的多个处理单元。这种处理器可以包括用于从多个图像传感器接收图像数据的视频输入，并且还可以包括视频输出能力。在一示例中，EyeQ2®使用以332Mhz运行的90nm微米技术。EyeQ2®架构包含两个浮点、超线程32位RISC CPU（MIPS32® 34K®内核）、五个视觉计算引擎（VCE）、三个矢量微码处理器（VMP®）、Denali 64位移动DDR控制器、128位内部Sonics互连、双16位视频输入和18位视频输出控制器、16通道DMA以及多个外围设备。MIPS34K CPU管理五个VCE、三个VMP®和DMA、第二MIPS34K CPU和多通道DMA以及其他外围设备。五个VCE、三个VMP®和MIPS34K CPU可以执行多功能捆绑应用程序所需的密集视觉计算。在另一示例中，作为第三代处理器并且比EyeQ2®功能强大六倍的EyeQ3®可以用于所公开的实施例。在其他示例中，EyeQ4®和/或EyeQ5®可以用于所公开的实施例。当然，任何更新的或将来的EyeQ处理设备也可以与所公开的实施例一起使用。In some embodiments, applications processor 180 and/or image processor 190 may include any of the EyeQ series of processor chips available from Mobileye®. These processor designs each include multiple processing units with local memory and instruction sets. Such processors may include video inputs for receiving image data from multiple image sensors and may also include video out capabilities. In one example, the EyeQ2® uses 90nm-micron technology operating at 332Mhz. The EyeQ2® architecture consists of two floating point, hyper-thread 32-bit RISC CPUs (MIPS32® 34K® cores), five Vision Computing Engines (VCE), three Vector Microcode Processors (VMP®), a Denali 64-bit Mobile DDR Controller, a 128-bit internal Sonics Interconnect, dual 16-bit Video input and 18-bit Video output controllers, 16 channels DMA, and several peripherals. The MIPS34K CPU manages the five VCEs, three VMP® and the DMA, the second MIPS34K CPU and the multi-channel DMA, as well as the other peripherals. The five VCEs, three VMP® and the MIPS34K CPU can perform intensive vision computations required by multi-function bundle applications. In another example, the EyeQ3®, which is a third generation processor and is six times more powerful than the EyeQ2®, may be used in the disclosed embodiments. In other examples, the EyeQ4® and/or the EyeQ5® may be used in the disclosed embodiments. Of course, any newer or future EyeQ processing devices may also be used with the disclosed embodiments.

本文公开的任何处理设备可以配置成执行某些功能。配置处理设备（比如所描述的EyeQ处理器或其他控制器或微处理器中的任何一个）以执行某些功能，可以包括对计算机可执行指令进行编程，并使这些指令在处理设备的操作期间可供处理设备执行。在一些实施例中，配置处理设备可以包括直接利用架构指令对处理设备进行编程。例如，可以使用例如一种或多种硬件描述语言（HDL）来配置比如现场可编程门阵列（FPGA）、专用集成电路（ASIC）等处理设备。Any of the processing devices disclosed herein may be configured to perform certain functions. Configuring a processing device, such as any of the described EyeQ processors or other controllers or microprocessors, to perform certain functions may include programming of computer-executable instructions and making those instructions available to the processing device for execution during the processing device's operation. In some embodiments, configuring a processing device may include programming the processing device directly with architectural instructions. For example, processing devices such as field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and the like may be configured using, for example, one or more hardware description languages (HDLs).

在其他实施例中,配置处理设备可以包括将可执行指令存储在操作期间处理设备可访问的存储器上。例如,处理设备可以在操作期间访问存储器以获得并执行所存储的指令。在任一种情况下,配置为执行本文公开的感测、图像分析和/或导航功能的处理设备代表控制主车辆的多个基于硬件的部件的基于专用硬件的系统。In other embodiments, configuring the processing device may include storing executable instructions on memory accessible to the processing device during operation. For example, a processing device may, during operation, access memory to obtain and execute stored instructions. In either case, the processing device configured to perform the sensing, image analysis, and/or navigation functions disclosed herein represents a dedicated hardware-based system that controls multiple hardware-based components of the host vehicle.

尽管图1描绘了包括在处理单元110中的两个单独的处理设备,但是可以使用更多或更少的处理设备。例如,在一些实施例中,单个处理设备可以用于完成应用处理器180和图像处理器190的任务。在其他实施例中,这些任务可以由两个以上的处理设备执行。此外,在一些实施例中,系统100可以包括一个或多个处理单元110,而不包括其他部件,比如图像获取单元120。Although FIG. 1 depicts two separate processing devices included in processing unit 110, more or fewer processing devices may be used. For example, in some embodiments, a single processing device may be used to accomplish the tasks of applications processor 180 and image processor 190 . In other embodiments, these tasks may be performed by more than two processing devices. Furthermore, in some embodiments, the system 100 may include one or more processing units 110 without including other components, such as the image acquisition unit 120 .

处理单元110可以包括各种类型的设备。例如,处理单元110可以包括各种设备,比如控制器、图像预处理器、中央处理单元(CPU)、图形处理单元(GPU)、支持电路、数字信号处理器、集成电路、存储器或用于图像处理和分析的任何其他类型的设备。图像预处理器可以包括视频处理器,用于捕获、数字化和处理来自图像传感器的图像。CPU可以包括任何数量的微控制器或微处理器。GPU还可包括任何数量的微控制器或微处理器。支持电路可以是本领域公知的任何数量的电路,包括高速缓存、电源、时钟和输入输出电路。存储器可以存储在由处理器执行时控制系统操作的软件。存储器可以包括数据库和图像处理软件。存储器可以包括任意数量的随机存取存储器、只读存储器、闪存、磁盘驱动器、光学存储器、磁带存储器、可移动存储器和其他类型的存储器。在一实例中,存储器可以与处理单元110分离。在另一实例中,存储器可以集成到处理单元110中。The processing unit 110 may include various types of devices. For example, the processing unit 110 may include various devices such as a controller, an image pre-processor, a central processing unit (CPU), a graphics processing unit (GPU), support circuits, a digital signal processor, an integrated circuit, memory, or Any other type of equipment for processing and analysis. Image pre-processors may include video processors for capturing, digitizing and processing images from the image sensor. A CPU may include any number of microcontrollers or microprocessors. A GPU may also include any number of microcontrollers or microprocessors. Support circuitry may be any number of circuits known in the art, including cache, power, clock, and input-output circuitry. The memory may store software that controls the operation of the system when executed by the processor. The memory may include databases and image processing software. The memory may include any number of random access memory, read only memory, flash memory, disk drives, optical memory, tape memory, removable memory, and other types of memory. In an example, the memory may be separate from the processing unit 110 . In another example, the memory may be integrated into the processing unit 110 .

每个存储器140、150可以包括在由处理器(例如应用处理器180和/或图像处理器190)执行时可以控制系统100的各个方面的操作的软件指令。这些存储器单元可以包括各种数据库和图像处理软件以及受过训练的系统,例如神经网络或深度神经网络。存储器单元可以包括随机存取存储器(RAM)、只读存储器(ROM)、闪存、磁盘驱动器、光学存储器、磁带存储器、可移动存储器和/或任何其他类型的存储器。在一些实施例中,存储单元140、150可以与应用处理器180和/或图像处理器190分离。在其他实施例中,这些存储单元可以集成到应用处理器180和/或图像处理器190中。Each memory 140 , 150 may include software instructions that, when executed by a processor (eg, applications processor 180 and/or graphics processor 190 ), may control the operation of various aspects of system 100 . These memory units can include various databases and image processing software as well as trained systems such as neural networks or deep neural networks. The memory unit may include random access memory (RAM), read only memory (ROM), flash memory, disk drives, optical storage, magnetic tape storage, removable memory, and/or any other type of memory. In some embodiments, storage units 140 , 150 may be separate from application processor 180 and/or image processor 190 . In other embodiments, these storage units may be integrated into the application processor 180 and/or the image processor 190 .

位置传感器130可以包括适于确定与系统100的至少一个部件相关的位置的任何类型的设备。在一些实施例中,位置传感器130可以包括GPS接收器。这种接收器可以通过处理由全球定位系统卫星广播的信号来确定用户位置和速度。来自位置传感器130的位置信息可被提供给应用处理器180和/或图像处理器190。Position sensor 130 may comprise any type of device suitable for determining a position relative to at least one component of system 100 . In some embodiments, location sensor 130 may include a GPS receiver. Such receivers can determine a user's location and velocity by processing signals broadcast by Global Positioning System satellites. Location information from location sensor 130 may be provided to application processor 180 and/or image processor 190 .

在一些实施例中，系统100可以包括诸如用于测量车辆200的速度的速度传感器（例如转速计、速度计）和/或用于测量车辆200的加速度的加速度计（单轴或多轴）之类的部件。In some embodiments, system 100 may include components such as a speed sensor (e.g., a tachometer or a speedometer) for measuring a speed of vehicle 200 and/or an accelerometer (either single-axis or multi-axis) for measuring an acceleration of vehicle 200.

用户界面170可以包括适于向系统100的一个或多个用户提供信息或从系统100的一个或多个用户接收输入的任何设备。在一些实施例中,用户界面170可以包括用户输入设备,包括例如触摸屏、麦克风、键盘、指针设备、滚轮、相机、旋钮、按钮等。采用这种输入设备,用户可能能够通过键入指令或信息、提供语音命令、使用按钮、指针或眼睛跟踪功能来选择屏幕上的菜单选项或者通过用于将信息传达到系统100的任何其他合适的技术来向系统100提供信息输入或命令。User interface 170 may include any device suitable for providing information to or receiving input from one or more users of system 100 . In some embodiments, user interface 170 may include user input devices including, for example, touch screens, microphones, keyboards, pointing devices, scroll wheels, cameras, knobs, buttons, and the like. With such an input device, a user may be able to select on-screen menu options by typing instructions or information, providing voice commands, using buttons, pointer or eye-tracking functionality, or through any other suitable technique for communicating information to system 100 to provide information input or commands to the system 100.

用户界面170可以配备有一个或多个处理设备,该设备配置成向用户提供信息或从用户接收信息,并处理该信息以供例如应用处理器180使用。在一些实施例中,这种处理设备可以执行用于识别和跟踪眼动、接收和解释语音命令、识别和解释在触摸屏上做出的触摸和/或手势、响应键盘输入或菜单选择等的指令。在一些实施例中,用户界面170可以包括显示器、扬声器、触觉设备和/或向用户提供输出信息的任何其他设备。User interface 170 may be equipped with one or more processing devices configured to provide information to or receive information from a user and to process the information for use by applications processor 180, for example. In some embodiments, such a processing device may execute instructions for recognizing and tracking eye movements, receiving and interpreting voice commands, recognizing and interpreting touches and/or gestures made on a touch screen, responding to keyboard input or menu selections, etc. . In some embodiments, user interface 170 may include a display, speaker, haptic device, and/or any other device that provides output information to a user.

地图数据库160可以包括用于存储对系统100有用的地图数据的任何类型的数据库。在一些实施例中,地图数据库160可以包括与各个项目在参考坐标系中的位置有关的数据,项目包括道路、水景、地理特征、企业、景点、饭店、加油站等。地图数据库160不仅可以存储这些项目的位置,而且可以存储与这些项目有关的描述符,例如包括与任何所存储的特征相关的名称。在一些实施例中,地图数据库160可以与系统100的其他部件物理地定位。可替代地或另外,地图数据库160或其一部分可以相对于系统100的其他部件(例如处理单元110)远程地定位。在这种实施例中,来自地图数据库160的信息可以通过有线或无线数据连接下载到网络(例如通过蜂窝网络和/或互联网等)。在一些情况下,地图数据库160可以存储稀疏数据模型,该稀疏数据模型包括用于主车辆的某些道路特征(例如车道标记)或目标轨迹的多项式表示。下面参考图8-19讨论生成这种地图的系统和方法。Map database 160 may include any type of database for storing map data useful to system 100 . In some embodiments, the map database 160 may include data related to the location of various items, including roads, water features, geographic features, businesses, attractions, restaurants, gas stations, etc., in a reference coordinate system. Map database 160 may store not only the locations of the items, but also descriptors related to the items, including, for example, names associated with any stored features. In some embodiments, map database 160 may be physically located with other components of system 100 . Alternatively or additionally, map database 160 or a portion thereof may be located remotely relative to other components of system 100 (eg, processing unit 110 ). In such an embodiment, information from map database 160 may be downloaded to a network (eg, via a cellular network and/or the Internet, etc.) via a wired or wireless data connection. In some cases, map database 160 may store a sparse data model that includes polynomial representations for certain road features (eg, lane markings) or object trajectories for the host vehicle. Systems and methods for generating such maps are discussed below with reference to FIGS. 8-19.
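作为说明性草图（多项式阶数与系数均为假设值），下面的Python代码示意稀疏地图中车道标记或目标轨迹的多项式表示可以如何被求值。As an illustrative sketch (the polynomial degree and coefficients are assumed values), the Python code below shows how a polynomial representation of a lane marking or target trajectory stored in such a sparse data model might be evaluated.

```python
def eval_trajectory(coeffs, x):
    """Evaluate a polynomial trajectory y(x) = c0 + c1*x + c2*x^2 + ...

    coeffs: polynomial coefficients (lowest order first) as they might be
            stored in a sparse map for one road segment
    x:      longitudinal distance along the road segment (meters)
    """
    y = 0.0
    for c in reversed(coeffs):  # Horner's method, highest order first
        y = y * x + c
    return y

# Hypothetical lane-center polynomial: y = 0.5 + 0.01*x - 0.0002*x**2
coeffs = [0.5, 0.01, -0.0002]
samples = [(x, eval_trajectory(coeffs, x)) for x in (0, 10, 50)]
```

每个道路区段只需存储少量系数即可重建平滑轨迹，这正是该数据模型“稀疏”的原因。Storing only a handful of coefficients per road segment is what keeps such a data model sparse.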

图像捕获设备122、124和126每个可以包括适于从环境捕获至少一个图像的任何类型的设备。而且,任何数量的图像捕获设备可以用于获取图像以输入到图像处理器。一些实施例可以仅包括单个图像捕获设备,而其他实施例可以包括两个、三个或者甚至四个或更多个图像捕获设备。下面将参考图2B-2E进一步描述图像捕获设备122、124和126。Image capture devices 122, 124, and 126 may each comprise any type of device suitable for capturing at least one image from the environment. Also, any number of image capture devices may be used to acquire images for input to the image processor. Some embodiments may include only a single image capture device, while other embodiments may include two, three, or even four or more image capture devices. Image capture devices 122, 124, and 126 are further described below with reference to FIGS. 2B-2E.

系统100或其各个部件可以结合到各种不同的平台中。在一些实施例中,如图2A所示,系统100可被包括在车辆200上。例如,车辆200可以配备有处理单元110和系统100的任何其他部件,如以上关于图1所述。尽管在一些实施例中,车辆200可以仅配备有单个图像捕获设备(例如相机),但是在其他实施例中,比如结合图2B-2E讨论的那些实施例,可以使用多个图像捕获设备。例如,如图2A所示,车辆200的图像捕获设备122和124中的任一个都可以是ADAS(高级驾驶员辅助系统)成像设备的一部分。System 100 or its individual components may be incorporated into a variety of different platforms. In some embodiments, the system 100 may be included on a vehicle 200 as shown in FIG. 2A . For example, vehicle 200 may be equipped with processing unit 110 and any other components of system 100 as described above with respect to FIG. 1 . While in some embodiments vehicle 200 may be equipped with only a single image capture device (eg, camera), in other embodiments, such as those discussed in connection with FIGS. 2B-2E , multiple image capture devices may be used. For example, as shown in FIG. 2A , either of the image capture devices 122 and 124 of the vehicle 200 may be part of an ADAS (Advanced Driver Assistance System) imaging device.

包括在车辆200上作为图像获取单元120的一部分的图像捕获设备可以位于任何合适的位置。在一些实施例中,如图2A-2E和3A-3C所示,图像捕获设备122可以位于后视镜附近。该位置可以提供与车辆200的驾驶员相似的视线,这可以帮助确定对驾驶员可见和不可见的东西。图像捕获设备122可以位于后视镜附近的任何位置,但是将图像捕获设备122放置在镜子的驾驶员侧上可以进一步帮助获得代表驾驶员的视场和/或视线的图像。The image capture device included on the vehicle 200 as part of the image capture unit 120 may be located in any suitable location. In some embodiments, as shown in Figures 2A-2E and 3A-3C, the image capture device 122 may be located near the rearview mirror. This location may provide a line of sight similar to that of the driver of the vehicle 200, which may help determine what is and is not visible to the driver. The image capture device 122 may be located anywhere near the rearview mirror, but placing the image capture device 122 on the driver's side of the mirror may further aid in obtaining an image representative of the driver's field of view and/or line of sight.

也可以使用图像获取单元120的图像捕获设备的其他位置。例如，图像捕获设备124可以位于车辆200的保险杠上或保险杠中。这种位置可以特别适合于具有宽视场的图像捕获设备。位于保险杠的图像捕获设备的视线可能与驾驶员的视线不同，因此，保险杠图像捕获设备和驾驶员可能并不总是看到相同的对象。图像捕获设备（例如图像捕获设备122、124和126）也可以位于其他位置。例如，图像捕获设备可以位于车辆200的侧镜的一个或两个上或之中、位于车辆200的顶板上、位于车辆200的引擎盖上、位于车辆200的后备箱上、位于车辆200的侧面、安装在车辆200的任何窗户上、定位在窗户的后方或前方以及安装在车辆200的前面和/或后面的灯具中或附近。Other locations for the image capture devices of image acquisition unit 120 may also be used. For example, image capture device 124 may be located on or in a bumper of vehicle 200. Such a location may be especially suitable for image capture devices having a wide field of view. The line of sight of bumper-located image capture devices can be different from that of the driver and, therefore, the bumper image capture device and driver may not always see the same objects. The image capture devices (e.g., image capture devices 122, 124, and 126) may also be located in other locations. For example, the image capture devices may be located on or in one or both of the side mirrors of vehicle 200, on the roof of vehicle 200, on the hood of vehicle 200, on the trunk of vehicle 200, on the sides of vehicle 200, mounted on, positioned behind, or positioned in front of any of the windows of vehicle 200, and mounted in or near the light fixtures on the front and/or back of vehicle 200.

除了图像捕获设备之外,车辆200可以包括系统100的各种其他部件。例如,处理单元110可以包括在车辆200上,与车辆的发动机控制单元(ECU)集成在一起或与之分开。车辆200还可以配备有位置传感器130,比如GPS接收器,并且还可以包括地图数据库160以及存储单元140和150。In addition to the image capture device, vehicle 200 may include various other components of system 100 . For example, processing unit 110 may be included on vehicle 200, integrated with or separate from an engine control unit (ECU) of the vehicle. Vehicle 200 may also be equipped with a position sensor 130 , such as a GPS receiver, and may also include a map database 160 and storage units 140 and 150 .

如前所述，无线收发器172可以通过一个或多个网络（例如蜂窝网络、互联网等）传输和/或接收数据。例如，无线收发器172可以将系统100收集的数据上传到一个或多个服务器，并且可以从一个或多个服务器下载数据。经由无线收发器172，系统100可以例如接收对存储在地图数据库160、存储器140和/或存储器150中的数据的定期或按需更新。类似地，无线收发器172可以将来自系统100的任何数据（例如由图像获取单元120捕获的图像、由位置传感器130或其他传感器、车辆控制系统等接收的数据）和/或由处理单元110处理的任何数据上传到一个或多个服务器。As previously discussed, wireless transceiver 172 may transmit and/or receive data over one or more networks (e.g., cellular networks, the Internet, etc.). For example, wireless transceiver 172 may upload data collected by system 100 to one or more servers, and may download data from the one or more servers. Via wireless transceiver 172, system 100 may, for example, receive periodic or on-demand updates to data stored in map database 160, memory 140, and/or memory 150. Similarly, wireless transceiver 172 may upload any data from system 100 (e.g., images captured by image acquisition unit 120, data received by position sensor 130 or other sensors, vehicle control systems, etc.) and/or any data processed by processing unit 110 to one or more servers.

系统100可以基于隐私级别设置将数据上传到服务器(例如上传到云)。例如,系统100可以实施隐私级别设置以调节或限制发送到服务器的数据类型(包括元数据),该数据可以唯一地识别车辆和/或车辆的驾驶员/所有者。这种设置可以由用户通过例如无线收发器172设置,通过出厂默认设置或通过由无线收发器172接收的数据进行初始化。System 100 may upload data to a server (eg, to the cloud) based on privacy level settings. For example, the system 100 may implement privacy level settings to regulate or limit the type of data (including metadata) sent to the server that can uniquely identify the vehicle and/or the driver/owner of the vehicle. Such settings may be set by the user via, for example, wireless transceiver 172 , initialized by factory default settings, or by data received by wireless transceiver 172 .

在一些实施例中，系统100可以根据“高”隐私级别来上传数据，并且在这种设置下，系统100可以在没有关于特定车辆和/或驾驶员/所有者的任何细节的情况下传输数据（例如与路线有关的位置信息、捕获的图像等）。例如，当根据“高”隐私设置上传数据时，系统100可以不包括车辆识别号（VIN）或车辆的驾驶员或所有者的姓名，而是可以传输诸如捕获的图像和/或与路线有关的有限位置信息之类的数据。In some embodiments, system 100 may upload data according to a "high" privacy level, and under such a setting system 100 may transmit data (e.g., location information related to a route, captured images, etc.) without any details about the specific vehicle and/or driver/owner. For example, when uploading data according to a "high" privacy setting, system 100 may not include a vehicle identification number (VIN) or a name of a driver or owner of the vehicle, and may instead transmit data such as captured images and/or limited location information related to the route.

预期其他隐私级别。例如,系统100可以根据“中间”隐私级别将数据传输到服务器,并且包括在“高”隐私级别下不包括的附加信息,比如车辆的品牌和/或型号和/或车辆类型(例如乘用车、越野车、卡车等)。在一些实施例中,系统100可以根据“低”隐私级别上传数据。在“低”隐私级别设置下,系统100可以上传数据并且包括足以唯一地识别特定车辆、所有者/驾驶员和/或车辆所行进的路线的一部分或全部的信息。这种“低”隐私级别数据可以包括例如以下各项中的一项或多项:VIN、驾驶员/所有者姓名、出发之前车辆的始发点、车辆的预期目的地、车辆的品牌和/或型号、车辆类型等。Other privacy levels are expected. For example, the system 100 may transmit data to the server according to a "medium" privacy level and include additional information not included at a "high" privacy level, such as the make and/or model of the vehicle and/or the type of vehicle (e.g., passenger car , off-road vehicles, trucks, etc.). In some embodiments, system 100 may upload data according to a "low" privacy level. At a "low" privacy level setting, the system 100 may upload data and include sufficient information to uniquely identify a particular vehicle, owner/driver, and/or part or all of a route traveled by the vehicle. Such "low" privacy level data may include, for example, one or more of the following: VIN, driver/owner name, point of origin of the vehicle prior to departure, intended destination of the vehicle, make of the vehicle, and/or Or model, vehicle type, etc.
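作为说明性草图（其中的字段名称纯属为示例而虚构，并非本公开定义的数据格式），下面的Python代码示意可以如何按隐私级别过滤待上传的数据。As an illustrative sketch (the field names here are invented purely for illustration and are not a data format defined by this disclosure), the Python code below shows how upload data might be filtered according to a privacy level.

```python
# Fields withheld at each privacy level (field names are illustrative only).
REDACTED_FIELDS = {
    "high":   {"vin", "driver_name", "origin", "destination",
               "make_model", "vehicle_type"},
    "medium": {"vin", "driver_name", "origin", "destination"},
    "low":    set(),  # a "low" setting may upload fully identifying data
}

def filter_upload(record, privacy_level):
    """Return a copy of the record with fields disallowed at this level removed."""
    redacted = REDACTED_FIELDS[privacy_level]
    return {k: v for k, v in record.items() if k not in redacted}

record = {"vin": "1ABC...", "driver_name": "...", "make_model": "sedan X",
          "route_points": [(32.1, 34.8)], "images": ["img001.jpg"]}
print(sorted(filter_upload(record, "high")))  # ['images', 'route_points']
```

在“中间”级别下，车辆品牌/型号等非唯一识别信息得以保留，而唯一识别字段仍被移除。At the "medium" level, non-uniquely-identifying information such as make/model survives while uniquely identifying fields are still removed.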

图2A是与所公开的实施例一致的示例性车辆成像系统的示意性侧视图表示。图2B是图2A所示的实施例的示意性俯视图。如图2B所示,所公开的实施例可以包括车辆200,该车辆200在其主体中包括系统100,该系统100具有定位在车辆200的后视镜附近和/或靠近驾驶员的第一图像捕获设备122、定位在车辆200的保险杠区域(例如保险杠区域210之一)上或之中的第二图像捕获设备124以及处理单元110。2A is a schematic side view representation of an exemplary vehicle imaging system consistent with disclosed embodiments. Figure 2B is a schematic top view of the embodiment shown in Figure 2A. As shown in FIG. 2B , the disclosed embodiments may include a vehicle 200 including the system 100 in its body with the first image positioned near the rearview mirror of the vehicle 200 and/or proximate to the driver. Capture device 122 , second image capture device 124 positioned on or in a bumper area of vehicle 200 , such as one of bumper areas 210 , and processing unit 110 .

如图2C所示,图像捕获设备122和124都可以定位在车辆200的后视镜附近和/或靠近驾驶员。另外,尽管在图2B和2C中示出了两个图像捕获设备122和124,但应当理解的是,其他实施例可以包括两个以上的图像捕获设备。例如,在图2D和2E所示的实施例中,第一、第二和第三图像捕获设备122、124、126包括在车辆200的系统100中。As shown in FIG. 2C , both image capture devices 122 and 124 may be positioned near a rearview mirror of vehicle 200 and/or in close proximity to the driver. Additionally, although two image capture devices 122 and 124 are shown in Figures 2B and 2C, it should be understood that other embodiments may include more than two image capture devices. For example, in the embodiment shown in FIGS. 2D and 2E , the first, second, and third image capture devices 122 , 124 , 126 are included in the system 100 of the vehicle 200 .

如图2D所示,图像捕获设备122可以定位在车辆200的后视镜附近和/或靠近驾驶员,并且图像捕获设备124和126可以定位在车辆200的保险杠区域(例如保险杠区域210之一)上或之中。并且如图2E所示,图像捕获设备122、124和126可以定位在车辆200的后视镜附近和/或靠近驾驶员座椅。所公开的实施例不限于图像捕获设备的任何特定数量和配置,并且图像捕获设备可以定位在车辆200内和/或其上的任何适当位置。As shown in FIG. 2D , image capture device 122 may be positioned near a rearview mirror of vehicle 200 and/or near the driver, and image capture devices 124 and 126 may be positioned in a bumper area of vehicle 200 (eg, between bumper area 210 ). 1) On or in. And as shown in FIG. 2E , the image capture devices 122 , 124 , and 126 may be positioned near the rearview mirror of the vehicle 200 and/or near the driver's seat. The disclosed embodiments are not limited to any particular number and configuration of image capture devices, and the image capture devices may be positioned in any suitable location within and/or on the vehicle 200 .

应当理解,所公开的实施例不限于车辆,并且可以在其他情况下应用。还应理解,所公开的实施例不限于特定类型的车辆200,并且可适用于所有类型的车辆,包括汽车、卡车、拖车及其他类型的车辆。It should be understood that the disclosed embodiments are not limited to vehicles and may find application in other contexts. It should also be understood that the disclosed embodiments are not limited to a particular type of vehicle 200, and are applicable to all types of vehicles, including automobiles, trucks, trailers, and other types of vehicles.

第一图像捕获设备122可以包括任何合适类型的图像捕获设备。图像捕获设备122可以包括光轴。在一实例中,图像捕获设备122可包括具有全局快门的Aptina M9V024 WVGA传感器。在其他实施例中,图像捕获设备122可以提供1280x960像素的分辨率,并且可以包括卷帘快门。图像捕获设备122可以包括各种光学元件。在一些实施例中,例如可以包括一个或多个透镜,以为图像捕获设备提供期望的焦距和视场。在一些实施例中,图像捕获设备122可以与6mm透镜或12mm透镜相关。在一些实施例中,如图2D所示,图像捕获设备122可以配置成捕获具有期望的视场(FOV)202的图像。例如,图像捕获设备122可以配置成具有规则的FOV,比如在40度至56度的范围内,包括46度FOV、50度FOV、52度FOV或更大。可替代地,图像捕获设备122可以配置成具有在23度至40度范围内的窄FOV,比如28度FOV或36度FOV。另外,图像捕获设备122可以配置成具有在100度至180度范围内的宽FOV。在一些实施例中,图像捕获设备122可以包括广角保险杠相机或具有高达180度FOV的相机。在一些实施例中,图像捕获设备122可以是具有约2:1的纵横比(例如HxV=3800x1900像素)和具有约100度水平FOV的7.2M像素图像捕获设备。可以使用这种图像捕获设备代替三个图像捕获设备配置。由于明显的透镜畸变,在图像捕获设备使用径向对称透镜的实施方式中,这种图像捕获设备的竖直FOV可以明显小于50度。例如,这种透镜可能不是径向对称的,这将允许竖直FOV大于50度且水平FOV为100度。First image capture device 122 may comprise any suitable type of image capture device. Image capture device 122 may include an optical axis. In an example, image capture device 122 may include an Aptina M9V024 WVGA sensor with a global shutter. In other embodiments, image capture device 122 may provide a resolution of 1280x960 pixels and may include a rolling shutter. Image capture device 122 may include various optical elements. In some embodiments, for example, one or more lenses may be included to provide a desired focal length and field of view for the image capture device. In some embodiments, image capture device 122 may be associated with a 6mm lens or a 12mm lens. In some embodiments, image capture device 122 may be configured to capture an image having a desired field of view (FOV) 202 , as shown in FIG. 2D . For example, image capture device 122 may be configured to have a regular FOV, such as in the range of 40 degrees to 56 degrees, including 46 degree FOV, 50 degree FOV, 52 degree FOV, or greater. Alternatively, the image capture device 122 may be configured with a narrow FOV in the range of 23 degrees to 40 degrees, such as a 28 degree FOV or a 36 degree FOV. Additionally, image capture device 122 may be configured to have a wide FOV in the range of 100 degrees to 180 degrees. 
In some embodiments, image capture device 122 may include a wide-angle bumper camera or a camera with an FOV of up to 180 degrees. In some embodiments, image capture device 122 may be a 7.2M pixel image capture device with an aspect ratio of approximately 2:1 (eg, HxV = 3800x1900 pixels) and with a horizontal FOV of approximately 100 degrees. Such an image capture device can be used instead of a three image capture device configuration. In embodiments where an image capture device uses radially symmetric lenses, the vertical FOV of such an image capture device may be significantly less than 50 degrees due to significant lens distortion. For example, such a lens may not be radially symmetric, which would allow a vertical FOV greater than 50 degrees and a horizontal FOV of 100 degrees.
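作为说明性草图（针孔模型；传感器宽度5.7mm为假设值，并非本公开给出的参数），下面的Python代码示意焦距与水平视场之间的关系：FOV = 2·atan(w/(2f))。上文提到的6mm与12mm透镜由此分别给出大致“规则”和“窄”的FOV。As an illustrative sketch (pinhole model; the 5.7mm sensor width is an assumed value, not a parameter given by this disclosure), the Python code below shows the relationship between focal length and horizontal FOV, FOV = 2·atan(w/(2f)); the 6mm and 12mm lenses mentioned above then yield roughly "regular" and "narrow" FOVs.

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Pinhole-model horizontal FOV: 2 * atan(w / (2 * f)), in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Assuming a hypothetical ~5.7mm-wide sensor:
print(round(horizontal_fov_deg(5.7, 6.0), 1))   # ~50.8 degrees ("regular" FOV)
print(round(horizontal_fov_deg(5.7, 12.0), 1))  # ~26.7 degrees ("narrow" FOV)
```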

第一图像捕获设备122可以获取关于与车辆200相关的场景的多个第一图像。多个第一图像中的每个可被获取为一系列图像扫描线,其可以通过使用卷帘快门而被捕获。每条扫描线可以包括多个像素。The first image capture device 122 may acquire a plurality of first images of a scene related to the vehicle 200 . Each of the plurality of first images may be acquired as a series of image scanlines, which may be captured using a rolling shutter. Each scan line may include multiple pixels.

第一图像捕获设备122可以具有与第一系列图像扫描线中的每一个的获取相关的扫描速率。扫描速率可以指图像传感器可以获取与特定扫描线中包括的每个像素相关的图像数据的速率。The first image capture device 122 may have a scan rate associated with the acquisition of each of the first series of image scanlines. A scan rate may refer to a rate at which an image sensor may acquire image data related to each pixel included in a specific scan line.
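作为说明性草图（其中行数与扫描速率均为示例数值），下面的Python代码示意扫描速率、行数与整帧读出时间之间的关系。As an illustrative sketch (the row count and scan rate are example numbers only), the Python code below shows how the scan rate relates the number of scan lines to the total frame readout time.

```python
def frame_readout_time_s(rows, scan_rate_rows_per_s):
    """Total time to read out all scan lines of one rolling-shutter frame."""
    return rows / scan_rate_rows_per_s

def row_timestamp_s(row_index, scan_rate_rows_per_s):
    """Time offset of a given scan line relative to the first line of the frame."""
    return row_index / scan_rate_rows_per_s

# Illustrative: 960 rows read out at 45,000 rows per second
print(frame_readout_time_s(960, 45_000))  # ~0.0213 s for the whole frame
print(row_timestamp_s(480, 45_000))       # middle row lags the top row by ~0.0107 s
```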

图像捕获设备122、124和126可以包含任何合适类型和数量的图像传感器，例如包括CCD传感器或CMOS传感器。在一实施例中，可以将CMOS图像传感器与卷帘快门一起使用，使得一次读取一行中的一个像素，并且逐行地进行扫描，直到整个图像帧都已被捕获。在一些实施例中，可以相对于帧从上到下顺序地捕获各行。Image capture devices 122, 124, and 126 may contain any suitable type and number of image sensors, including CCD sensors or CMOS sensors, for example. In one embodiment, a CMOS image sensor may be employed along with a rolling shutter, such that each pixel in a row is read one at a time, and scanning of the rows proceeds on a row-by-row basis until an entire image frame has been captured. In some embodiments, the rows may be captured sequentially from top to bottom relative to the frame.

在一些实施例中,本文公开的一个或多个图像捕获设备(例如图像捕获设备122、124和126)可以构成高分辨率成像器并且可以具有大于5M像素、7M像素、10M像素或更大的分辨率。In some embodiments, one or more of the image capture devices disclosed herein (eg, image capture devices 122, 124, and 126) may constitute a high-resolution imager and may have a resolution greater than 5M pixels, 7M pixels, 10M pixels, or larger resolution.

卷帘快门的使用可能导致不同行中的像素在不同时间被曝光和捕获,这可能导致捕获的图像帧中的歪斜和其他图像伪影。另一方面,当图像捕获设备122配置成与全局或同步快门一起操作时,所有像素可以在相同的时间量内并且在共同的曝光时间段内曝光。结果,从采用全局快门的系统收集的帧中的图像数据表示在特定时间处的整个FOV(比如FOV202)的快照。相反,在卷帘快门应用中,一帧中的每一行都被暴露,并且数据在不同的时间处被捕获。因此,在具有卷帘快门的图像捕获设备中,运动的对象可能看起来变形。该现象将在下面更详细地描述。The use of a rolling shutter can cause pixels in different rows to be exposed and captured at different times, which can lead to skew and other image artifacts in the captured image frame. On the other hand, when the image capture device 122 is configured to operate with a global or synchronized shutter, all pixels may be exposed for the same amount of time and during a common exposure time period. As a result, image data in frames collected from a system employing a global shutter represents a snapshot of the entire FOV (such as FOV 202 ) at a particular time. In contrast, in rolling shutter applications, each line in a frame is exposed and data is captured at different times. Therefore, in an image capture device with a rolling shutter, moving objects may appear distorted. This phenomenon will be described in more detail below.
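作为说明性草图（速度、行数与每行时间均为示例数值），下面的Python代码估计卷帘快门下运动对象的表观歪斜量：对象在成像它的第一条与最后一条扫描线之间移动的横向距离。As an illustrative sketch (the speed, row count, and per-row time are example numbers only), the Python code below estimates the apparent skew of a moving object under a rolling shutter: the lateral distance the object travels between the first and last scan line that images it.

```python
def rolling_shutter_skew_m(object_speed_mps, rows_spanned, row_time_s):
    """Lateral distance an object moves between the first and last scan line
    that images it; this appears as shear/skew in the captured frame."""
    return object_speed_mps * rows_spanned * row_time_s

# A vehicle crossing the scene at 20 m/s, imaged over 200 rows,
# with 1/45000 s per row:
skew = rolling_shutter_skew_m(20.0, 200, 1 / 45_000)
print(round(skew, 3))  # ~0.089 m of apparent shear across the object
```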

第二图像捕获设备124和第三图像捕获设备126可以是任何类型的图像捕获设备。如同第一图像捕获设备122一样,图像捕获设备124和126中的每个可以包括光轴。在一实施例中,图像捕获设备124和126中的每个可以包括具有全局快门的Aptina M9V024 WVGA传感器。可替代地,图像捕获设备124和126中的每个可以包括卷帘快门。如同图像捕获设备122一样,图像捕获设备124和126可以配置成包括各种透镜和光学元件。在一些实施例中,与图像捕获设备124和126相关的透镜可以提供与与图像捕获设备122相关的FOV(比如FOV 202)相同或比其更窄的FOV(比如FOV 204和206)。例如,图像捕获设备124和126可以具有40度、30度、26度、23度、20度或更小的FOV。Second image capture device 124 and third image capture device 126 may be any type of image capture device. As with first image capture device 122, each of image capture devices 124 and 126 may include an optical axis. In an embodiment, each of image capture devices 124 and 126 may include an Aptina M9V024 WVGA sensor with a global shutter. Alternatively, each of image capture devices 124 and 126 may include a rolling shutter. Like image capture device 122, image capture devices 124 and 126 may be configured to include various lenses and optical elements. In some embodiments, the lenses associated with image capture devices 124 and 126 may provide a FOV (such as FOVs 204 and 206 ) that is the same as or narrower than the FOV (such as FOV 202 ) associated with image capture device 122 . For example, image capture devices 124 and 126 may have a FOV of 40 degrees, 30 degrees, 26 degrees, 23 degrees, 20 degrees, or less.

图像捕获设备124和126可以获取关于与车辆200相关的场景的多个第二和第三图像。多个第二和第三图像中的每一个可被获取为第二和第三系列图像扫描线,其可以使用卷帘快门而被捕获。每个扫描线或行可以具有多个像素。图像捕获设备124和126可以具有与第二和第三系列中包括的每个图像扫描线的获取相关的第二和第三扫描速率。Image capture devices 124 and 126 may acquire a plurality of second and third images of a scene related to vehicle 200 . Each of the plurality of second and third images may be acquired as a second and third series of image scanlines, which may be captured using a rolling shutter. Each scan line or row can have multiple pixels. Image capture devices 124 and 126 may have second and third scan rates associated with the acquisition of each image scan line included in the second and third series.

每个图像捕获设备122、124和126可以定位在相对于车辆200的任何合适位置和方位。可以选择图像捕获设备122、124和126的相对定位以有助于将从图像捕获设备获取的信息融合在一起。例如,在一些实施例中,与图像捕获设备124相关的FOV(比如FOV204)可以与与图像捕获设备122相关的FOV(比如FOV202)和与图像捕获设备126相关的FOV(比如FOV206)部分地或完全地重叠。Each image capture device 122, 124, and 126 may be positioned in any suitable position and orientation relative to the vehicle 200. The relative positioning of image capture devices 122, 124, and 126 may be selected to facilitate fusing together the information obtained from the image capture devices. For example, in some embodiments, the FOV associated with image capture device 124 (such as FOV 204) may partially or completely overlap the FOV associated with image capture device 122 (such as FOV 202) and the FOV associated with image capture device 126 (such as FOV 206).

图像捕获设备122、124和126可以以任何合适的相对高度位于车辆200上。在一实例中,图像捕获设备122、124和126之间可能存在高度差,这可以提供足够的视差信息以实现立体分析。例如,如图2A所示,两个图像捕获设备122和124处于不同的高度。例如,图像捕获设备122、124和126之间也可能存在横向位移差,从而给出了附加的视差信息以供处理单元110进行立体分析。如图2C和2D所示,横向位移差可以用dx表示。在一些实施例中,在图像捕获设备122、124和126之间可以存在前后位移(例如范围位移)。例如,图像捕获设备122可以位于图像捕获设备124和/或图像捕获设备126之后0.5至2米或更多。这种类型的位移可以使其中一个图像捕获设备能够覆盖其他图像捕获设备的潜在盲点。Image capture devices 122 , 124 , and 126 may be located on vehicle 200 at any suitable relative height. In one example, there may be a height difference between image capture devices 122, 124, and 126, which may provide sufficient disparity information to enable stereoscopic analysis. For example, as shown in Figure 2A, the two image capture devices 122 and 124 are at different heights. For example, there may also be lateral displacement differences between image capture devices 122, 124, and 126, giving additional disparity information for stereoscopic analysis by processing unit 110. As shown in Figures 2C and 2D, the lateral displacement difference can be represented by dx. In some embodiments, there may be a front-to-back displacement (eg, range displacement) between image capture devices 122 , 124 , and 126 . For example, image capture device 122 may be located 0.5 to 2 meters or more behind image capture device 124 and/or image capture device 126 . This type of displacement can enable one of the image capture devices to cover the potential blind spot of the other image capture device.
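The depth information carried by the lateral displacement dx follows the standard pinhole-stereo relation Z = f·B/d; a minimal sketch, with purely illustrative focal length, baseline, and disparity values:

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Standard pinhole-stereo relation: Z = f * B / d.

    focal_length_px: focal length expressed in pixels.
    baseline_m:      lateral offset (the dx above) between the two cameras.
    disparity_px:    horizontal pixel shift of the same scene point
                     between the two images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: f = 1000 px, dx = 0.3 m baseline, 15 px disparity.
z_far = depth_from_disparity(1000.0, 0.3, 15.0)
# Closer objects produce larger disparities.
z_near = depth_from_disparity(1000.0, 0.3, 30.0)
```

The same relation shows why a wider baseline (larger dx) gives finer depth resolution at long range: for a fixed depth, disparity scales linearly with the baseline.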

图像捕获设备122可以具有任何合适的分辨率能力(例如与图像传感器相关的像素数),并且与图像捕获设备122相关的图像传感器的分辨率可以更高、更低或与与图像捕获设备124和126相关的图像传感器的分辨率相同。在一些实施例中,与图像捕获设备122和/或图像捕获设备124和126相关的图像传感器可以具有640x480、1024x768、1280x960的分辨率或其他任何合适的分辨率。Image capture device 122 may have any suitable resolution capability (e.g., the number of pixels associated with the image sensor), and the resolution of the image sensor associated with image capture device 122 may be higher than, lower than, or the same as the resolution of the image sensors associated with image capture devices 124 and 126. In some embodiments, the image sensors associated with image capture device 122 and/or image capture devices 124 and 126 may have a resolution of 640x480, 1024x768, 1280x960, or any other suitable resolution.

帧速率(例如图像捕获设备在继续捕获与下一图像帧相关的像素数据之前获取一个图像帧的一组像素数据的速率)是可控制的。与图像捕获设备122相关的帧速率可以更高、更低或与与图像捕获设备124和126相关的帧速率相同。与图像捕获设备122、124和126相关的帧速率可以取决于可能影响帧速率定时的各种因素。例如,图像捕获设备122、124和126中的一个或多个可以包括在获取与图像捕获设备122、124和/或126中的图像传感器的一个或多个像素相关的图像数据之前或之后施加的可选像素延迟时段。通常,可以根据设备的时钟速率(例如每个时钟周期一个像素)来获取与每个像素相对应的图像数据。另外,在包括卷帘快门的实施例中,图像捕获设备122、124和126中的一个或多个可以包括在获取与图像捕获设备122、124和/或126中的图像传感器的一行像素相关的图像数据之前或之后施加的可选水平消隐时段。此外,图像捕获设备122、124和/或126中的一个或多个可以包括在获取与图像捕获设备122、124和126的图像帧相关的图像数据之前或之后施加的可选竖直消隐时段。The frame rate (e.g., the rate at which an image capture device acquires a set of pixel data for one image frame before moving on to capture pixel data associated with the next image frame) is controllable. The frame rate associated with image capture device 122 may be higher than, lower than, or the same as the frame rates associated with image capture devices 124 and 126. The frame rates associated with image capture devices 122, 124, and 126 may depend on a variety of factors that may affect frame rate timing. For example, one or more of image capture devices 122, 124, and 126 may include a selectable pixel delay period imposed before or after acquisition of image data associated with one or more pixels of an image sensor in image capture devices 122, 124, and/or 126. Generally, image data corresponding to each pixel may be acquired according to a clock rate of the device (e.g., one pixel per clock cycle). Additionally, in embodiments including a rolling shutter, one or more of image capture devices 122, 124, and 126 may include a selectable horizontal blanking period imposed before or after acquisition of image data associated with a row of pixels of an image sensor in image capture devices 122, 124, and/or 126. Further, one or more of image capture devices 122, 124, and/or 126 may include a selectable vertical blanking period imposed before or after acquisition of image data associated with an image frame of image capture devices 122, 124, and 126.
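How the pixel delay and blanking periods feed into frame rate can be sketched as follows. The clock rate and timing counts are illustrative assumptions (loosely WVGA-like), not values from the disclosed embodiments:

```python
def frame_rate_hz(clock_hz, pixels_per_line, h_blank_pixels,
                  lines_per_frame, v_blank_lines):
    """Approximate frame rate for a sensor that reads one pixel per clock cycle.

    Each row costs its active pixels plus a selectable horizontal blanking
    period; each frame costs its active rows plus a selectable vertical
    blanking period -- the timing factors described in the text above.
    """
    cycles_per_line = pixels_per_line + h_blank_pixels
    cycles_per_frame = cycles_per_line * (lines_per_frame + v_blank_lines)
    return clock_hz / cycles_per_frame

# Illustrative WVGA-like timing (assumed): 27 MHz pixel clock,
# 752 active + 94 blanking pixels per line, 480 active + 45 blanking lines.
fps = frame_rate_hz(27e6, 752, 94, 480, 45)
# Lengthening either blanking period lowers the frame rate.
slower = frame_rate_hz(27e6, 752, 94, 480, 90)
```

Under these assumed numbers the frame rate lands near 60 Hz, and doubling the vertical blanking drops it by several frames per second, which is exactly the control lever the blanking periods provide for synchronization.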

即使每个的行扫描速率不同,这些定时控制也可以使与图像捕获设备122、124和126相关的帧速率同步。另外,如将在下面更详细地讨论,即使图像捕获设备122的视场不同于图像捕获设备124和126的FOV,这些可选定时控制以及其他因素(例如图像传感器分辨率、最大行扫描速率等)也可以使来自图像捕获设备122的FOV与图像捕获设备124和126的一个或多个FOV重叠的区域的图像捕获同步。These timing controls may enable synchronization of the frame rates associated with image capture devices 122, 124, and 126, even where the line scan rate of each is different. Additionally, as will be discussed in greater detail below, these selectable timing controls, among other factors (e.g., image sensor resolution, maximum line scan rate, etc.), may enable synchronization of image capture from an area where the FOV of image capture device 122 overlaps with one or more FOVs of image capture devices 124 and 126, even where the field of view of image capture device 122 differs from the FOVs of image capture devices 124 and 126.

图像捕获设备122、124和126中的帧速率定时可以取决于相关的图像传感器的分辨率。例如,假设两个设备的线扫描速率相似,如果一个设备包括分辨率为640x480的图像传感器,而另一个设备包括分辨率为1280x960的图像传感器,则将需要更多时间以从具有高分辨率的传感器获取图像数据的帧。Frame rate timing in image capture devices 122, 124, and 126 may depend on the resolution of the associated image sensors. For example, assuming similar line scan rates for both devices, if one device includes an image sensor with a resolution of 640x480 and another device includes an image sensor with a resolution of 1280x960, then more time will be required to acquire a frame of image data from the sensor with the higher resolution.

可能影响图像捕获设备122、124和126中的图像数据获取的定时的另一个因素是最大线扫描速率。例如,从包括在图像捕获设备122、124和126中的图像传感器获取一行图像数据将需要一些最小时间量。假设不增加像素延迟时段,则用于获取一行图像数据的最小时间量将与特定设备的最大线扫描速率有关。与最大线扫描速率较低的设备相比,提供最大线扫描速率较高的设备有可能提供更高的帧速率。在一些实施例中,图像捕获设备124和126中的一个或多个可具有的最大线扫描速率高于与图像捕获设备122相关的最大线扫描速率。在一些实施例中,图像捕获设备124和/或126的最大线扫描速率可以是图像捕获设备122的最大线扫描速率的1.25、1.5、1.75或2倍或者更多倍。Another factor that may affect the timing of image data acquisition in image capture devices 122, 124, and 126 is the maximum line scan rate. For example, acquiring a line of image data from the image sensors included in image capture devices 122, 124, and 126 will require some minimum amount of time. Assuming no increase in the pixel delay period, the minimum amount of time to acquire a line of image data will be related to the maximum line scan rate for a particular device. A device that offers a higher maximum line scan rate has the potential to provide a higher frame rate than a device with a lower maximum line scan rate. In some embodiments, one or more of image capture devices 124 and 126 may have a higher maximum line scan rate than the maximum line scan rate associated with image capture device 122 . In some embodiments, the maximum line scan rate of image capture device 124 and/or 126 may be 1.25, 1.5, 1.75, or 2 or more times the maximum line scan rate of image capture device 122 .

在另一实施例中,图像捕获设备122、124和126可以具有相同的最大线扫描速率,但是图像捕获设备122可以以小于或等于其最大扫描速率的扫描速率进行操作。该系统可以配置成使得图像捕获设备124和126中的一个或多个以等于图像捕获设备122的线扫描率的线扫描率进行操作。在其他实例中,该系统可以配置成使得图像捕获设备124和/或图像捕获设备126的线扫描速率可以是图像捕获设备122的线扫描速率的1.25、1.5、1.75或2倍或者更多倍。In another embodiment, image capture devices 122, 124, and 126 may have the same maximum line scan rate, but image capture device 122 may operate at a scan rate less than or equal to its maximum scan rate. The system may be configured such that one or more of image capture devices 124 and 126 operates at a line scan rate equal to that of image capture device 122 . In other examples, the system may be configured such that the line scan rate of image capture device 124 and/or image capture device 126 may be 1.25, 1.5, 1.75, or 2 or more times the line scan rate of image capture device 122 .
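One simplified way to see where scan-rate ratios such as 2x arise: if a narrow-FOV camera spreads the same number of rows over half the angular extent of a wide-FOV camera, it must scan lines twice as fast for both cameras to sweep the shared overlap region in the same wall-clock time. A sketch under that equal-row-count, uniform-angular-distribution assumption (a simplification, not the disclosed synchronization scheme):

```python
def required_scan_rate_ratio(wide_fov_deg, narrow_fov_deg):
    """Factor by which the narrow-FOV camera's line scan rate must exceed
    the wide camera's for both to sweep the shared overlap region in the
    same wall-clock time, assuming equal row counts and rows distributed
    uniformly over each camera's vertical FOV.
    """
    if not 0 < narrow_fov_deg <= wide_fov_deg:
        raise ValueError("expected 0 < narrow FOV <= wide FOV")
    return wide_fov_deg / narrow_fov_deg

# A 52-degree wide camera paired with a 26-degree narrow camera would call
# for a 2x line scan rate -- one way ratios in the 1.25x-2x band can arise.
ratio = required_scan_rate_ratio(52.0, 26.0)
```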

在一些实施例中,图像捕获设备122、124和126可以是不对称的。也就是说,它们可以包括具有不同视场(FOV)和焦距的相机。例如,图像捕获设备122、124和126的视场可以包括相对于车辆200的环境的任何期望的区域。在一些实施例中,图像捕获设备122、124和126中的一个或多个可以配置成从车辆200前面、车辆200后面、到车辆200的侧面或它们的组合的环境获取图像数据。In some embodiments, image capture devices 122, 124, and 126 may be asymmetrical. That is, they can include cameras with different fields of view (FOV) and focal lengths. For example, the fields of view of image capture devices 122 , 124 , and 126 may include any desired area relative to the environment of vehicle 200 . In some embodiments, one or more of image capture devices 122 , 124 , and 126 may be configured to acquire image data from the environment in front of vehicle 200 , behind vehicle 200 , to the sides of vehicle 200 , or combinations thereof.

此外,与每个图像捕获设备122、124和/或126相关的焦距可以是可选择的(例如通过包括适当的透镜等),使得每个设备在相对于车辆200的期望的距离范围内获取对象的图像。例如,在一些实施例中,图像捕获设备122、124和126可以获取距车辆几米之内的特写对象的图像。图像捕获设备122、124和126还可以配置成获取在距车辆更远的范围(例如25m、50m、100m、150m或更大)处的对象的图像。此外,可以选择图像捕获设备122、124和126的焦距,使得一个图像捕获设备(例如图像捕获设备122)可以获取相对靠近车辆(例如在10m或20m内)的对象的图像,而其他图像捕获设备(例如图像捕获设备124和126)可以获取距车辆200更远距离(例如大于20m、50m、100m、150m等)的对象的图像。Additionally, the focal length associated with each image capture device 122, 124, and/or 126 may be selectable (e.g., by inclusion of appropriate lenses, etc.) such that each device acquires images of objects at a desired distance range relative to vehicle 200. For example, in some embodiments, image capture devices 122, 124, and 126 may acquire images of close-up objects within a few meters of the vehicle. Image capture devices 122, 124, and 126 may also be configured to acquire images of objects at ranges more distant from the vehicle (e.g., 25 m, 50 m, 100 m, 150 m, or more). Further, the focal lengths of image capture devices 122, 124, and 126 may be selected such that one image capture device (e.g., image capture device 122) can acquire images of objects relatively close to the vehicle (e.g., within 10 m or 20 m), while the other image capture devices (e.g., image capture devices 124 and 126) can acquire images of more distant objects (e.g., greater than 20 m, 50 m, 100 m, 150 m, etc.) from vehicle 200.

根据一些实施例,一个或多个图像捕获设备122、124和126的FOV可以具有广角。例如,具有140度的FOV可能是有利的,特别是对于可用于捕获车辆200附近区域的图像的图像捕获设备122、124和126。例如,图像捕获设备122可用于捕获车辆200的右侧或左侧区域的图像,并且在这种实施例中,可能期望图像捕获设备122具有宽FOV(例如至少140度)。According to some embodiments, the FOV of one or more image capture devices 122, 124, and 126 may have a wide angle. For example, having a FOV of 140 degrees may be advantageous, particularly for image capture devices 122 , 124 , and 126 that may be used to capture images of the area near vehicle 200 . For example, image capture device 122 may be used to capture images of areas to the right or left of vehicle 200, and in such embodiments, it may be desirable for image capture device 122 to have a wide FOV (eg, at least 140 degrees).

与图像捕获设备122、124和126中的每一个相关的视场可以取决于各自的焦距。例如,随着焦距增加,相应的视场减小。The fields of view associated with each of image capture devices 122, 124, and 126 may depend on the respective focal lengths. For example, as the focal length increases, the corresponding field of view decreases.
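The inverse relation between focal length and field of view noted above is the standard pinhole-camera formula FOV = 2·atan(w / 2f); a minimal sketch with an assumed (illustrative) sensor width:

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Pinhole-camera relation: FOV = 2 * atan(w / (2 * f)).

    Longer focal lengths yield narrower fields of view, as noted above.
    """
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# Illustrative 5.7 mm-wide sensor: doubling the focal length from 3 mm to
# 6 mm narrows the horizontal FOV substantially.
wide_fov = horizontal_fov_deg(5.7, 3.0)
narrow_fov = horizontal_fov_deg(5.7, 6.0)
```

Note the relation is not linear: because of the arctangent, doubling the focal length narrows the FOV by somewhat less than a factor of two at wide angles.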

图像捕获设备122、124和126可以配置成具有任何合适的视场。在一特定示例中,图像捕获设备122可以具有46度的水平FOV,图像捕获设备124可以具有23度的水平FOV,并且图像捕获设备126可以具有在23度至46度之间的水平FOV。在另一实例中,图像捕获设备122可以具有52度的水平FOV,图像捕获设备124可以具有26度的水平FOV,并且图像捕获设备126可以具有在26度至52度之间的水平FOV。在一些实施例中,图像捕获设备122的FOV与图像捕获设备124和/或图像捕获设备126的FOV的比率可以从1.5到2.0变化。在其他实施例中,该比率可以在1.25与2.25之间变化。Image capture devices 122, 124, and 126 may be configured to have any suitable field of view. In a particular example, image capture device 122 may have a horizontal FOV of 46 degrees, image capture device 124 may have a horizontal FOV of 23 degrees, and image capture device 126 may have a horizontal FOV of between 23 degrees and 46 degrees. In another example, image capture device 122 may have a horizontal FOV of 52 degrees, image capture device 124 may have a horizontal FOV of 26 degrees, and image capture device 126 may have a horizontal FOV of between 26 degrees and 52 degrees. In some embodiments, the ratio of the FOV of image capture device 122 to the FOV of image capture device 124 and/or image capture device 126 may vary from 1.5 to 2.0. In other embodiments, this ratio may vary between 1.25 and 2.25.
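A trivial check of the FOV-ratio bands mentioned above; the helper function and its default bounds are illustrative, not part of the disclosed system:

```python
def fov_ratio_in_range(wide_fov_deg, narrow_fov_deg, lo=1.25, hi=2.25):
    """Return (in_range, ratio) for the wide/narrow FOV ratio against a
    configured band, such as the 1.25-2.25 range mentioned above."""
    ratio = wide_fov_deg / narrow_fov_deg
    return lo <= ratio <= hi, ratio

# Both example pairings from the text have a ratio of exactly 2.0.
ok_46_23, r1 = fov_ratio_in_range(46.0, 23.0)
ok_52_26, r2 = fov_ratio_in_range(52.0, 26.0)
```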

系统100可以配置成使得图像捕获设备122的视场与图像捕获设备124和/或图像捕获设备126的视场至少部分地或完全地重叠。在一些实施例中,系统100可以配置成使得图像捕获设备124和126的视场例如落入图像捕获设备122的视场内(例如比之更窄)并且与之共享共同的中心。在其他实施例中,图像捕获设备122、124和126可以捕获相邻的FOV或者可以在它们的FOV中具有部分重叠。在一些实施例中,图像捕获设备122、124和126的视场可以对准,使得较窄FOV图像捕获设备124和/或126的中心可以位于较宽FOV设备122的视场的下半部。System 100 may be configured such that the field of view of image capture device 122 at least partially or completely overlaps the field of view of image capture device 124 and/or image capture device 126 . In some embodiments, system 100 may be configured such that the fields of view of image capture devices 124 and 126 fall within (eg, narrower than) the field of view of image capture device 122 and share a common center therewith, for example. In other embodiments, image capture devices 122, 124, and 126 may capture adjacent FOVs or may have partial overlap in their FOVs. In some embodiments, the fields of view of image capture devices 122 , 124 , and 126 may be aligned such that the center of narrower FOV image capture devices 124 and/or 126 may be located in the lower half of the field of view of wider FOV device 122 .

图2F是与所公开的实施例一致的示例性车辆控制系统的示意性表示。如图2F所示,车辆200可包括节流系统220、制动系统230和转向系统240。系统100可通过一个或多个数据链路(例如用于传输数据的任何一个或多个有线和/或无线链路)向节流系统220、制动系统230和转向系统240中的一个或多个提供输入(例如控制信号)。例如,基于对由图像捕获设备122、124和/或126获取的图像的分析,系统100可以向节流系统220、制动系统230和转向系统240中的一个或多个提供控制信号以导航车辆200(例如通过引起加速、转弯、车道偏移等)。此外,系统100可以从节流系统220、制动系统230和转向系统240中的一个或多个接收指示车辆200的运行状况(例如速度、车辆200是否正在制动和/或转弯等)的输入。下面结合图4-7提供进一步的细节。FIG. 2F is a diagrammatic representation of exemplary vehicle control systems, consistent with the disclosed embodiments. As shown in FIG. 2F, vehicle 200 may include a throttle system 220, a braking system 230, and a steering system 240. System 100 may provide inputs (e.g., control signals) to one or more of throttle system 220, braking system 230, and steering system 240 over one or more data links (e.g., any wired and/or wireless link or links for transmitting data). For example, based on analysis of images acquired by image capture devices 122, 124, and/or 126, system 100 may provide control signals to one or more of throttle system 220, braking system 230, and steering system 240 to navigate vehicle 200 (e.g., by causing an acceleration, a turn, a lane shift, etc.). Further, system 100 may receive inputs from one or more of throttle system 220, braking system 230, and steering system 240 indicating operating conditions of vehicle 200 (e.g., speed, whether vehicle 200 is braking and/or turning, etc.). Further details are provided below in connection with FIGS. 4-7.
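The flow from image analysis to actuation inputs might be sketched as follows; all class names, signal encodings, and decision rules here are hypothetical illustrations, not part of the disclosed control systems:

```python
from dataclasses import dataclass

@dataclass
class ControlSignal:
    """A hypothetical control message sent over a data link to one of the
    vehicle actuation systems (throttle 220, braking 230, steering 240)."""
    target: str    # "throttle" | "brake" | "steering"
    value: float   # normalized command, e.g. brake level or steering offset

def plan_response(obstacle_ahead, lane_drift):
    """Toy decision rule mapping image-analysis results to control signals.

    obstacle_ahead: whether the analysis detected an obstacle in the path.
    lane_drift:     signed lateral offset from the lane center (normalized).
    """
    signals = []
    if obstacle_ahead:
        signals.append(ControlSignal("brake", 0.8))
    if abs(lane_drift) > 0.2:
        # Steer back toward the lane center, opposite the drift direction.
        signals.append(ControlSignal("steering", -lane_drift))
    return signals

cmds = plan_response(obstacle_ahead=True, lane_drift=0.5)
```

In a real system the inputs from the actuation systems (speed, braking state, etc.) would feed back into this decision step; the sketch shows only the outbound direction of the data links.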

如图3A所示,车辆200还可以包括用于与车辆200的驾驶员或乘客交互的用户界面170。例如,车辆应用中的用户界面170可以包括触摸屏320、旋钮330、按钮340和麦克风350。车辆200的驾驶员或乘客还可以使用手柄(例如位于包括例如转向信号手柄的车辆200的转向柱上或附近)、按钮(例如位于车辆200的方向盘上)等,以与系统100进行交互。在一些实施例中,麦克风350可以定位成邻近后视镜310。类似地,在一些实施例中,图像捕获设备122可以位于后视镜310附近。在一些实施例中,用户界面170还可以包括一个或多个扬声器360(例如车辆音频系统的扬声器)。例如,系统100可以经由扬声器360提供各种通知(例如警报)。As shown in FIG. 3A , the vehicle 200 may also include a user interface 170 for interacting with a driver or passenger of the vehicle 200 . For example, user interface 170 in a vehicle application may include touch screen 320 , knob 330 , buttons 340 and microphone 350 . A driver or passenger of the vehicle 200 may also use handles (eg, located on or near the steering column of the vehicle 200 including, for example, a turn signal handle), buttons (eg, located on the steering wheel of the vehicle 200 ), etc., to interact with the system 100 . In some embodiments, microphone 350 may be positioned adjacent rearview mirror 310 . Similarly, image capture device 122 may be located near rearview mirror 310 in some embodiments. In some embodiments, the user interface 170 may also include one or more speakers 360 (eg, speakers of a vehicle audio system). For example, system 100 may provide various notifications (eg, alarms) via speaker 360 .

图3B-3D是与所公开的实施例一致的示例性相机安装座370的图示,该相机安装座370配置成定位在后视镜(例如后视镜310)后面并且紧靠车辆挡风玻璃。如图3B所示,相机安装座370可以包括图像捕获设备122、124和126。图像捕获设备124和126可以位于防眩罩380后面,该防眩罩380可以与车辆挡风玻璃齐平并且包括膜和/或防反射材料的成分。例如,防眩罩380可以定位成使得该罩紧靠具有匹配斜率的车辆挡风玻璃对准。在一些实施例中,例如,图像捕获设备122、124和126中的每一个可以定位在防眩罩380后面,如图3D所示。所公开的实施例不限于图像捕获设备122、124和126、相机安装座370和防眩罩380的任何特定配置。图3C是从正面观看的图3B所示的相机安装座370的图示。3B-3D are illustrations of an exemplary camera mount 370 configured to be positioned behind a rearview mirror (eg, rearview mirror 310 ) and in close proximity to a vehicle windshield, consistent with disclosed embodiments . As shown in FIG. 3B , camera mount 370 may include image capture devices 122 , 124 , and 126 . Image capture devices 124 and 126 may be located behind an anti-glare cover 380, which may be flush with the vehicle windshield and include a composition of film and/or anti-reflective material. For example, anti-glare cover 380 may be positioned such that the cover is aligned against a vehicle windshield having a matching slope. In some embodiments, for example, each of image capture devices 122, 124, and 126 may be positioned behind anti-glare shield 380, as shown in FIG. 3D. The disclosed embodiments are not limited to any particular configuration of image capture devices 122 , 124 , and 126 , camera mount 370 , and anti-glare shield 380 . FIG. 3C is an illustration of the camera mount 370 shown in FIG. 3B viewed from the front.

如受益于本公开的本领域技术人员将理解,可以对前述公开的实施例进行多种变化和/或修改。例如,并非所有部件对于系统100的操作都是必不可少的。此外,任何部件可以位于系统100的任何适当的部分中,并且部件可以重新布置成各种配置同时提供所公开的实施例的功能。因此,前述配置是示例,并且无论以上讨论的配置如何,系统100可以提供广泛的功能以分析车辆200的周围环境并响应于该分析来导航车辆200。Various changes and/or modifications may be made to the foregoing disclosed embodiments, as will be appreciated by those skilled in the art having the benefit of this disclosure. For example, not all components are essential to the operation of system 100 . Furthermore, any components may be located in any suitable portion of system 100, and components may be rearranged in various configurations while providing the functionality of the disclosed embodiments. Accordingly, the foregoing configurations are examples, and regardless of the configurations discussed above, the system 100 may provide a wide range of functionality to analyze the surrounding environment of the vehicle 200 and navigate the vehicle 200 in response to the analysis.

如下面进一步详细讨论并且与各种所公开的实施例一致,系统100可以提供与自主驾驶和/或驾驶员辅助技术有关的各种特征。例如,系统100可以分析图像数据、位置数据(例如GPS位置信息)、地图数据、速度数据和/或来自车辆200中包括的传感器的数据。系统100可以从例如图像获取单元120、位置传感器130和其他传感器中收集用于分析的数据。此外,系统100可以分析所收集的数据以确定车辆200是否应当采取某种动作,然后在没有人工干预的情况下自动采取所确定的动作。例如,当车辆200在没有人工干预的情况下导航时,系统100可以自动控制车辆200的制动、加速和/或转向(例如通过向节流系统220、制动系统230和转向系统240中的一个或多个发送控制信号)。此外,系统100可以分析所收集的数据并且基于对所收集的数据的分析来向车辆乘员发出警告和/或警报。下面提供关于由系统100提供的各种实施例的附加细节。As discussed in further detail below and consistent with the various disclosed embodiments, system 100 may provide a variety of features related to autonomous driving and/or driver assist technology. For example, system 100 may analyze image data, position data (e.g., GPS location information), map data, speed data, and/or data from sensors included in vehicle 200. System 100 may collect the data for analysis from, for example, image acquisition unit 120, position sensor 130, and other sensors. Further, system 100 may analyze the collected data to determine whether or not vehicle 200 should take a certain action, and then automatically take the determined action without human intervention. For example, when vehicle 200 navigates without human intervention, system 100 may automatically control the braking, acceleration, and/or steering of vehicle 200 (e.g., by sending control signals to one or more of throttle system 220, braking system 230, and steering system 240). Further, system 100 may analyze the collected data and issue warnings and/or alerts to vehicle occupants based on the analysis of the collected data. Additional details regarding the various embodiments that are provided by system 100 are provided below.

前向多成像系统 Forward Multi-Imaging System

如上所述,系统100可提供使用多相机系统的驾驶辅助功能。该多相机系统可以使用面向车辆的向前方向的一个或多个相机。在其他实施例中,多相机系统可以包括面向车辆侧面或车辆后部的一个或多个相机。在一实施例中,例如,系统100可以使用双相机成像系统,其中第一相机和第二相机(例如图像捕获设备122和124)可以位于车辆(例如车辆200)的前部和/或侧面。第一相机的视场可以比第二相机的视场更大、更小或与之部分地重叠。另外,第一相机可以连接至第一图像处理器以对由第一相机提供的图像执行单眼图像分析,并且第二相机可以连接至第二图像处理器以对由第二相机提供的图像进行单眼图像分析。第一和第二图像处理器的输出(例如处理后的信息)可被组合。在一些实施例中,第二图像处理器可以从第一相机和第二相机两者接收图像以执行立体分析。在另一实施例中,系统100可以使用三相机成像系统,其中每个相机具有不同的视场。因此,这种系统可以基于从位于车辆的前方和侧面的不同距离处的对象得出的信息来做出决定。对单眼图像分析的引用可以指基于从单个视点(例如从单个相机)捕获的图像执行图像分析的实例。立体图像分析可以指基于利用图像捕获参数的一个或多个变体而捕获的两个或更多个图像来执行图像分析的实例。例如,适于执行立体图像分析的捕获图像可以包括通过以下捕获的图像:从两个或更多个不同位置、从不同视场、使用不同焦距以及视差信息等。As discussed above, system 100 may provide drive assist functionality that uses a multi-camera system. The multi-camera system may use one or more cameras facing in the forward direction of a vehicle. In other embodiments, the multi-camera system may include one or more cameras facing to the side of a vehicle or to the rear of the vehicle. In one embodiment, for example, system 100 may use a two-camera imaging system, where a first camera and a second camera (e.g., image capture devices 122 and 124) may be positioned at the front and/or the sides of a vehicle (e.g., vehicle 200). The first camera may have a field of view that is greater than, less than, or partially overlapping with, the field of view of the second camera. In addition, the first camera may be connected to a first image processor to perform monocular image analysis of images provided by the first camera, and the second camera may be connected to a second image processor to perform monocular image analysis of images provided by the second camera. The outputs (e.g., the processed information) of the first and second image processors may be combined. In some embodiments, the second image processor may receive images from both the first camera and the second camera to perform stereo analysis. In another embodiment, system 100 may use a three-camera imaging system where each of the cameras has a different field of view. Such a system may, therefore, make decisions based on information derived from objects located at varying distances both forward and to the sides of the vehicle. References to monocular image analysis may refer to instances where image analysis is performed based on images captured from a single point of view (e.g., from a single camera). Stereo image analysis may refer to instances where image analysis is performed based on two or more images captured with one or more variations of an image capture parameter. For example, captured images suitable for performing stereo image analysis may include images captured from two or more different positions, from different fields of view, using different focal lengths, along with parallax information, etc.

例如,在一实施例中,系统100可以使用图像捕获设备122、124和126来实现三相机配置。在这种配置中,图像捕获设备122可以提供窄视场(例如34度或从约20到45度的范围中选择的其他值等),图像捕获设备124可以提供宽视场(例如150度或从约100到约180度的范围中选择的其他值),且图像捕获设备126可以提供中间视场(例如46度或从约35到约60度的范围中选择的其他值)。在一些实施例中,图像捕获设备126可以充当主相机或主要相机。图像捕获设备122、124和126可以定位在后视镜310后面,并且基本上并排定位(例如相距6cm)。此外,在一些实施例中,如上所述,图像捕获设备122、124和126中的一个或多个可以安装在与车辆200的挡风玻璃齐平的防眩罩380后面。这种罩可以最大程度地减少车内任何反射对图像捕获设备122、124和126的影响。For example, in one embodiment, system 100 may implement a three-camera configuration using image capture devices 122, 124, and 126. In such a configuration, image capture device 122 may provide a narrow field of view (e.g., 34 degrees or other value selected from a range of approximately 20 to 45 degrees, etc.), and image capture device 124 may provide a wide field of view (e.g., 150 degrees or other values selected from the range of about 100 to about 180 degrees), and image capture device 126 may provide an intermediate field of view (eg, 46 degrees or other values selected from the range of about 35 to about 60 degrees). In some embodiments, image capture device 126 may act as a primary or primary camera. Image capture devices 122, 124, and 126 may be positioned behind rearview mirror 310 and positioned substantially side-by-side (eg, 6 cm apart). Additionally, in some embodiments, one or more of image capture devices 122 , 124 , and 126 may be mounted behind a glare shield 380 flush with the windshield of vehicle 200 , as described above. Such an enclosure can minimize the effect of any reflections in the vehicle on the image capture devices 122, 124, and 126.

在另一实施例中,如以上结合图3B和3C所述,宽视场相机(例如以上示例中的图像捕获设备124)可以安装在窄视场和主视场相机(例如以上示例中的图像设备122和126)下方。该配置可以提供来自宽视场相机的自由视线。为了减少反射,可以将相机安装成靠近车辆200的挡风玻璃,并且可以在相机上包括偏振器以衰减反射光。In another embodiment, as discussed above in connection with FIGS. 3B and 3C, the wide field of view camera (e.g., image capture device 124 in the above example) may be mounted lower than the narrow field of view and main field of view cameras (e.g., image devices 122 and 126 in the above example). This configuration may provide a free line of sight from the wide field of view camera. To reduce reflections, the cameras may be mounted close to the windshield of vehicle 200, and polarizers may be included on the cameras to damp reflected light.

三相机系统可以提供某些性能特征。例如,一些实施例可以包括基于来自另一相机的检测结果来验证一个相机对对象的检测的能力。在上面所述的三相机配置中,处理单元110可以包括例如三个处理设备(例如如上所述的三个EyeQ系列处理器芯片),每个处理设备专用于处理由图像捕获设备122、124和126中的一个或多个捕获的图像。A three camera system may provide certain performance characteristics. For example, some embodiments may include an ability to validate detection of objects by one camera based on detection results from another camera. In the three camera configuration discussed above, processing unit 110 may include, for example, three processing devices (e.g., three EyeQ series of processor chips, as discussed above), with each processing device dedicated to processing images captured by one or more of image capture devices 122, 124, and 126.

在三相机系统中,第一处理设备可以从主相机和窄视场相机接收图像,并对窄FOV相机执行视觉处理,例如以检测其他车辆、行人、车道标记、交通标志、交通信号灯及其他道路对象。此外,第一处理设备可以计算来自主相机和窄相机的图像之间的像素的视差,并创建车辆200的环境的3D重构。然后,第一处理设备可以将3D重构与3D地图数据或基于来自另一相机的信息计算出的3D信息组合。In a three camera system, a first processing device may receive images from both the main camera and the narrow field of view camera, and perform vision processing on images from the narrow FOV camera, for example, to detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects. Further, the first processing device may calculate a disparity of pixels between the images from the main camera and the narrow camera and create a 3D reconstruction of the environment of vehicle 200. The first processing device may then combine the 3D reconstruction with 3D map data, or with 3D information calculated based on information from another camera.

第二处理设备可以从主相机接收图像并执行视觉处理以检测其他车辆、行人、车道标记、交通标志、交通信号灯及其他道路对象。另外,第二处理设备可以计算相机位移,并且基于该位移,计算连续图像之间的像素的视差,并创建场景的3D重构(例如来自运动的结构)。第二处理设备可以将该结构从基于运动的3D重构发送到第一处理设备以与立体3D图像组合。The second processing device may receive images from the main camera and perform vision processing to detect other vehicles, pedestrians, lane marks, traffic signs, traffic lights, and other road objects. Additionally, the second processing device may calculate a camera displacement and, based on the displacement, calculate a disparity of pixels between successive images and create a 3D reconstruction of the scene (e.g., structure from motion). The second processing device may send the structure-from-motion based 3D reconstruction to the first processing device to be combined with the stereo 3D images.

第三处理设备可以从宽FOV相机接收图像并处理图像以检测车辆、行人、车道标记、交通标志、交通信号灯及其他道路对象。第三处理设备可以进一步执行附加处理指令以分析图像来识别在图像中移动的对象,比如改变车道的车辆、行人等。A third processing device may receive images from the wide FOV camera and process the images to detect vehicles, pedestrians, lane markings, traffic signs, traffic lights, and other road objects. The third processing device may further execute additional processing instructions to analyze the images to identify objects moving in the images, such as vehicles changing lanes, pedestrians, etc.

在一些实施例中,具有被独立捕获和处理的基于图像的信息流可以提供在系统中提供冗余的机会。这样的冗余可以包括例如使用第一图像捕获设备和从该设备处理的图像以验证和/或补充通过从至少第二图像捕获设备捕获和处理图像信息而获得的信息。In some embodiments, having image-based information streams that are captured and processed independently may provide an opportunity to provide redundancy in the system. Such redundancy may include, for example, using a first image capture device and images processed from that device to verify and/or supplement information obtained by capturing and processing image information from at least a second image capture device.

在一些实施例中,系统100可以在为车辆200提供导航辅助时使用两个图像捕获设备(例如图像捕获设备122和124),并且使用第三图像捕获设备(例如图像捕获设备126)来提供冗余并验证对从其他两个图像捕获设备接收的数据的分析。例如,在这种配置中,图像捕获设备122和124可以提供图像用于由系统100进行立体分析来导航车辆200,而图像捕获设备126可以提供图像用于由系统100进行单眼分析以提供冗余并验证基于从图像捕获设备122和/或图像捕获设备124捕获的图像所获得的信息。也就是说,可以将图像捕获设备126(和相应的处理设备)视为提供冗余子系统,以提供对从图像捕获设备122和124导出的分析的检查(例如提供自动紧急制动(AEB)系统)。此外,在一些实施例中,可以基于从一个或多个传感器(例如雷达、激光雷达、声波传感器、从车辆外部的一个或多个收发器接收的信息等)接收的信息来补充所接收的数据的冗余和验证。In some embodiments, system 100 may use two image capture devices (e.g., image capture devices 122 and 124) in providing navigation assistance for vehicle 200 and use a third image capture device (e.g., image capture device 126) to provide redundancy and to validate the analysis of data received from the other two image capture devices. For example, in such a configuration, image capture devices 122 and 124 may provide images for stereo analysis by system 100 for navigating vehicle 200, while image capture device 126 may provide images for monocular analysis by system 100 to provide redundancy and validation of information obtained based on images captured from image capture device 122 and/or image capture device 124. That is, image capture device 126 (and a corresponding processing device) may be considered to provide a redundant sub-system for providing a check on the analysis derived from image capture devices 122 and 124 (e.g., to provide an automatic emergency braking (AEB) system). Furthermore, in some embodiments, redundancy and validation of received data may be supplemented based on information received from one or more sensors (e.g., radar, lidar, acoustic sensors, information received from one or more transceivers outside of a vehicle, etc.).
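The redundancy-and-validation idea can be sketched as a simple cross-check between two detection pipelines; the distance representation, tolerance, and rule below are illustrative assumptions only, not the disclosed validation logic:

```python
def cross_validate_detections(primary_hits, redundant_hits, tol_m=1.0):
    """Hypothetical redundancy check: keep a detection from the primary
    pipeline (e.g., stereo analysis of devices 122 and 124) only if the
    redundant pipeline (e.g., monocular analysis of device 126) reports an
    object within tol_m of the same estimated distance.

    primary_hits / redundant_hits: estimated object distances in meters.
    """
    validated = []
    for d in primary_hits:
        if any(abs(d - r) <= tol_m for r in redundant_hits):
            validated.append(d)
    return validated

# The primary pipeline sees objects at 12 m and 40 m; the redundant pipeline
# confirms only the 12 m object (12.4 m is within the 1 m tolerance).
confirmed = cross_validate_detections([12.0, 40.0], [12.4, 55.0])
```

A real system would match on richer state (bearing, size, class, track identity) rather than distance alone; the sketch only shows the cross-check structure.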

本领域技术人员将认识到,以上相机配置、相机放置、相机数量、相机位置等仅是示例。相对于整个系统描述的这些部件和其他部件可以在不脱离所公开的实施例的范围的情况下以各种不同的配置进行组装和使用。下面是关于使用多相机系统来提供驾驶员辅助和/或自主功能的更多细节。Those skilled in the art will recognize that the above camera configurations, camera placements, number of cameras, camera locations, etc. are examples only. These and other components described with respect to the overall system can be assembled and used in various different configurations without departing from the scope of the disclosed embodiments. Below are more details on the use of multi-camera systems to provide driver assistance and/or autonomous functions.

图4是存储器140和/或150的示例性功能框图,其可以存储有用于执行与所公开的实施例一致的一个或多个操作的指令/利用其进行编程。尽管以下是指存储器140,但本领域技术人员要认识到指令可以存储在存储器140和/或150中。FIG. 4 is an exemplary functional block diagram of memory 140 and/or 150, which may store/be programmed with instructions for performing one or more operations consistent with the disclosed embodiments. Although the following refers to memory 140, one of skill in the art will recognize that instructions may be stored in memory 140 and/or 150.

如图4所示,存储器140可以存储单眼图像分析模块402、立体图像分析模块404、速度和加速度模块406以及导航响应模块408。公开的实施例不限于存储器140的任何特定配置。此外,应用处理器180和/或图像处理器190可以执行存储在存储器140中包括的模块402、404、406和408中的任何一个中的指令。本领域技术人员将理解,在以下讨论中对处理单元110的引用可以单独或共同地指代应用处理器180和图像处理器190。因此,可以由一个或多个处理设备执行以下任何处理的步骤。As shown in FIG. 4 , the memory 140 may store a monocular image analysis module 402 , a stereoscopic image analysis module 404 , a velocity and acceleration module 406 , and a navigation response module 408 . The disclosed embodiments are not limited to any particular configuration of memory 140 . Also, the application processor 180 and/or the image processor 190 may execute instructions stored in any one of the modules 402 , 404 , 406 , and 408 included in the memory 140 . Those skilled in the art will appreciate that references to processing unit 110 in the following discussion may refer to application processor 180 and image processor 190 individually or collectively. Accordingly, steps of any of the following processes may be performed by one or more processing devices.

In one embodiment, monocular image analysis module 402 may store instructions (such as computer vision software) which, when executed by processing unit 110, perform monocular image analysis of a set of images acquired by one of image capture devices 122, 124, and 126. In some embodiments, processing unit 110 may combine information from the set of images with additional sensory information (e.g., information from radar, lidar, etc.) to perform the monocular image analysis. As described below in connection with FIGS. 5A-5D, monocular image analysis module 402 may include instructions for detecting a set of features within the set of images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, hazardous objects, and any other feature associated with an environment of a vehicle. Based on the analysis, system 100 (e.g., via processing unit 110) may cause one or more navigational responses in vehicle 200, such as a turn, a lane shift, a change in acceleration, and the like, as discussed below in connection with navigational response module 408.

In one embodiment, stereo image analysis module 404 may store instructions (such as computer vision software) which, when executed by processing unit 110, perform stereo image analysis of first and second sets of images acquired by a combination of image capture devices selected from any of image capture devices 122, 124, and 126. In some embodiments, processing unit 110 may combine information from the first and second sets of images with additional sensory information (e.g., information from radar) to perform the stereo image analysis. For example, stereo image analysis module 404 may include instructions for performing stereo image analysis based on a first set of images acquired by image capture device 124 and a second set of images acquired by image capture device 126. As described below in connection with FIG. 6, stereo image analysis module 404 may include instructions for detecting a set of features within the first and second sets of images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, hazardous objects, and the like. Based on the analysis, processing unit 110 may cause one or more navigational responses in vehicle 200, such as a turn, a lane shift, a change in acceleration, and the like, as discussed below in connection with navigational response module 408. Furthermore, in some embodiments, stereo image analysis module 404 may implement techniques associated with a trained system (such as a neural network or a deep neural network) or an untrained system (such as a system that may be configured to use computer vision algorithms to detect and/or label objects in an environment from which sensory information is captured and processed). In one embodiment, stereo image analysis module 404 and/or other image processing modules may be configured to use a combination of trained and untrained systems.

In one embodiment, velocity and acceleration module 406 may store software configured to analyze data received from one or more computing and electromechanical devices in vehicle 200 that are configured to cause a change in velocity and/or acceleration of vehicle 200. For example, processing unit 110 may execute instructions associated with velocity and acceleration module 406 to calculate a target speed for vehicle 200 based on data derived from execution of monocular image analysis module 402 and/or stereo image analysis module 404. Such data may include, for example, a target position, velocity, and/or acceleration; the position and/or speed of vehicle 200 relative to a nearby vehicle, pedestrian, or road object; position information for vehicle 200 relative to lane markings of the road; and the like. In addition, processing unit 110 may calculate a target speed for vehicle 200 based on sensory input (e.g., information from radar) and input from other systems of vehicle 200, such as throttling system 220, braking system 230, and/or steering system 240 of vehicle 200. Based on the calculated target speed, processing unit 110 may transmit electronic signals to throttling system 220, braking system 230, and/or steering system 240 of vehicle 200 to trigger a change in velocity and/or acceleration by, for example, physically depressing the brake or easing up off the accelerator of vehicle 200.

In one embodiment, navigational response module 408 may store software executable by processing unit 110 to determine a desired navigational response based on data derived from execution of monocular image analysis module 402 and/or stereo image analysis module 404. Such data may include position and speed information associated with nearby vehicles, pedestrians, and road objects, target position information for vehicle 200, and the like. Additionally, in some embodiments, the navigational response may be based (partially or fully) on map data, a predetermined position of vehicle 200, and/or a relative velocity or a relative acceleration between vehicle 200 and one or more objects detected from execution of monocular image analysis module 402 and/or stereo image analysis module 404. Navigational response module 408 may also determine a desired navigational response based on sensory input (e.g., information from radar) and inputs from other systems of vehicle 200, such as throttling system 220, braking system 230, and/or steering system 240 of vehicle 200. Based on the desired navigational response, processing unit 110 may transmit electronic signals to throttling system 220, braking system 230, and steering system 240 of vehicle 200 to trigger the desired navigational response by, for example, turning the steering wheel of vehicle 200 to achieve a rotation of a predetermined angle. In some embodiments, processing unit 110 may use the output of navigational response module 408 (e.g., the desired navigational response) as an input to execution of velocity and acceleration module 406 for calculating a change in the speed of vehicle 200.

Furthermore, any of the modules disclosed herein (e.g., modules 402, 404, and 406) may implement techniques associated with a trained system (such as a neural network or a deep neural network) or an untrained system.

FIG. 5A is a flowchart showing an exemplary process 500A for causing one or more navigational responses based on monocular image analysis, consistent with disclosed embodiments. At step 510, processing unit 110 may receive a plurality of images via data interface 128 between processing unit 110 and image acquisition unit 120. For instance, a camera included in image acquisition unit 120 (such as image capture device 122 having field of view 202) may capture a plurality of images of an area forward of vehicle 200 (or, for example, to the sides or rear of the vehicle) and transmit them over a data connection (e.g., digital, wired, USB, wireless, Bluetooth, etc.) to processing unit 110. At step 520, processing unit 110 may execute monocular image analysis module 402 to analyze the plurality of images, as described in further detail below in connection with FIGS. 5B-5D. By performing the analysis, processing unit 110 may detect a set of features within the set of images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, and the like.

At step 520, processing unit 110 may also execute monocular image analysis module 402 to detect various road hazards, such as parts of a truck tire, fallen road signs, loose cargo, small animals, and the like. Road hazards may vary in structure, shape, size, and color, which may make detection of such hazards more challenging. In some embodiments, processing unit 110 may execute monocular image analysis module 402 to perform multi-frame analysis on the plurality of images to detect road hazards. For example, processing unit 110 may estimate camera motion between consecutive image frames and calculate the disparities in pixels between the frames to construct a 3D map of the road. Processing unit 110 may then use the 3D map to detect the road surface, as well as hazards present on the road surface.

At step 530, processing unit 110 may execute navigational response module 408 to cause one or more navigational responses in vehicle 200 based on the analysis performed at step 520 and the techniques described above in connection with FIG. 4. Navigational responses may include, for example, a turn, a lane shift, a change in acceleration, and the like. In some embodiments, processing unit 110 may use data derived from execution of velocity and acceleration module 406 to cause the one or more navigational responses. Additionally, multiple navigational responses may occur simultaneously, in sequence, or in any combination thereof. For instance, processing unit 110 may cause vehicle 200 to shift one lane over and then accelerate by, for example, sequentially transmitting control signals to steering system 240 and throttling system 220 of vehicle 200. Alternatively, processing unit 110 may cause vehicle 200 to brake while simultaneously shifting lanes by, for example, simultaneously transmitting control signals to braking system 230 and steering system 240 of vehicle 200.

FIG. 5B is a flowchart showing an exemplary process 500B for detecting one or more vehicles and/or pedestrians in a set of images, consistent with disclosed embodiments. Processing unit 110 may execute monocular image analysis module 402 to implement process 500B. At step 540, processing unit 110 may determine a set of candidate objects representing possible vehicles and/or pedestrians. For example, processing unit 110 may scan one or more images, compare the images to one or more predetermined patterns, and identify within each image possible locations that may contain objects of interest (e.g., vehicles, pedestrians, or portions thereof). The predetermined patterns may be designed in such a way as to achieve a high rate of "false hits" and a low rate of "misses." For example, processing unit 110 may use a low threshold of similarity to the predetermined patterns for identifying candidate objects as possible vehicles or pedestrians. Doing so may allow processing unit 110 to reduce the probability of missing (e.g., not identifying) a candidate object representing a vehicle or pedestrian.

At step 542, processing unit 110 may filter the set of candidate objects based on classification criteria to exclude certain candidates (e.g., irrelevant or less relevant objects). Such criteria may be derived from various properties associated with object types stored in a database (e.g., a database stored in memory 140). Properties may include object shape, dimensions, texture, position (e.g., relative to vehicle 200), and the like. Thus, processing unit 110 may use one or more sets of criteria to reject false candidates from the set of candidate objects.

At step 544, processing unit 110 may analyze multiple frames of images to determine whether objects in the set of candidate objects represent vehicles and/or pedestrians. For example, processing unit 110 may track a detected candidate object across consecutive frames and accumulate frame-by-frame data associated with the detected object (e.g., size, position relative to vehicle 200, etc.). Additionally, processing unit 110 may estimate parameters for the detected object and compare the object's frame-by-frame position data to a predicted position.

At step 546, processing unit 110 may construct a set of measurements for the detected objects. Such measurements may include, for example, position, velocity, and acceleration values (relative to vehicle 200) associated with the detected objects. In some embodiments, processing unit 110 may construct the measurements based on estimation techniques using a series of time-based observations, such as Kalman filters or linear quadratic estimation (LQE), and/or based on available modeling data for different object types (e.g., cars, trucks, pedestrians, bicycles, road signs, etc.). The Kalman filters may be based on a measurement of an object's scale, where the scale measurement is proportional to a time to collision (e.g., the amount of time for vehicle 200 to reach the object). Thus, by performing steps 540-546, processing unit 110 may identify vehicles and pedestrians appearing within the set of captured images and derive information (e.g., position, speed, size) associated with the vehicles and pedestrians. Based on the identification and the derived information, processing unit 110 may cause one or more navigational responses in vehicle 200, as described above in connection with FIG. 5A.
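The scale-based measurement underlying the Kalman filtering described above can be sketched as follows. This is a minimal illustration, not the filter itself: it assumes an object's apparent scale has already been measured in two frames, and derives a time to collision from the relative scale change.

```python
def time_to_collision(scale_prev: float, scale_curr: float, dt: float) -> float:
    """Estimate time to collision from the change in an object's image scale.

    As the vehicle approaches an object, the object's apparent size (scale)
    in the image grows; the relative growth rate over the frame interval dt
    gives the time to collision:
        TTC ~ dt / (scale_curr / scale_prev - 1)
    """
    ratio = scale_curr / scale_prev
    if ratio <= 1.0:
        return float("inf")  # object not growing: no closing motion detected
    return dt / (ratio - 1.0)
```

For example, an object growing from 100 to 105 pixels over a 0.1 s frame interval would yield a time to collision of about 2 seconds.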

At step 548, processing unit 110 may perform an optical flow analysis of one or more images to reduce the probabilities of detecting a "false hit" and missing a candidate object that represents a vehicle or pedestrian. The optical flow analysis may refer to, for example, analyzing motion patterns relative to vehicle 200 in the one or more images associated with other vehicles and pedestrians, as distinct from road surface motion. Processing unit 110 may calculate the motion of candidate objects by observing the different positions of the objects across multiple image frames captured at different times. Processing unit 110 may use the position and time values as inputs into mathematical models for calculating the motion of the candidate objects. Thus, optical flow analysis may provide another method of detecting vehicles and pedestrians in the vicinity of vehicle 200. Processing unit 110 may perform optical flow analysis in combination with steps 540-546 to provide redundancy for detecting vehicles and pedestrians and increase the reliability of system 100.

FIG. 5C is a flowchart showing an exemplary process 500C for detecting road marks and/or lane geometry information in a set of images, consistent with disclosed embodiments. Processing unit 110 may execute monocular image analysis module 402 to implement process 500C. At step 550, processing unit 110 may detect a set of objects by scanning one or more images. To detect segments of lane markings, lane geometry information, and other pertinent road marks, processing unit 110 may filter the set of objects to exclude those determined to be irrelevant (e.g., minor potholes, small rocks, etc.). At step 552, processing unit 110 may group together the segments detected in step 550 belonging to the same road mark or lane mark. Based on the grouping, processing unit 110 may develop a model, such as a mathematical model, to represent the detected segments.

At step 554, processing unit 110 may construct a set of measurements associated with the detected segments. In some embodiments, processing unit 110 may create a projection of the detected segments from the image plane onto the real-world plane. The projection may be characterized using a third-degree polynomial having coefficients corresponding to physical properties such as the position, slope, curvature, and curvature derivative of the detected road. In generating the projection, processing unit 110 may take into account changes in the road surface, as well as pitch and roll rates associated with vehicle 200. In addition, processing unit 110 may model the road elevation by analyzing position and motion cues present on the road surface. Further, processing unit 110 may estimate the pitch and roll rates associated with vehicle 200 by tracking a set of feature points in the one or more images.
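The third-degree polynomial characterization described above can be sketched as a least-squares fit over the projected segment points. The function name and the use of NumPy's polyfit are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

def fit_lane_polynomial(x: np.ndarray, z: np.ndarray) -> np.ndarray:
    """Fit lateral offset x as a cubic polynomial of forward distance z:
        x(z) = c0 + c1*z + c2*z**2 + c3*z**3
    At z = 0, c0 relates to the lateral position, c1 to the slope (heading),
    2*c2 to the curvature, and 6*c3 to the curvature derivative.
    """
    # np.polyfit returns the highest-order coefficient first; reverse to c0..c3
    return np.polyfit(z, x, deg=3)[::-1]
```

On points sampled from a straight lane boundary offset by 1 m with a 10% lateral drift, the fit recovers c0 ~ 1 and c1 ~ 0.1 with the curvature terms near zero.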

At step 556, processing unit 110 may perform multi-frame analysis by, for example, tracking the detected segments across consecutive image frames and accumulating frame-by-frame data associated with the detected segments. As processing unit 110 performs multi-frame analysis, the set of measurements constructed at step 554 may become more reliable and associated with an increasingly higher confidence level. Thus, by performing steps 550, 552, 554, and 556, processing unit 110 may identify road marks appearing within the set of captured images and derive lane geometry information. Based on the identification and the derived information, processing unit 110 may cause one or more navigational responses in vehicle 200, as described above in connection with FIG. 5A.

At step 558, processing unit 110 may consider additional sources of information to further develop a safety model for vehicle 200 in the context of its surroundings. Processing unit 110 may use the safety model to define a context in which system 100 may execute autonomous control of vehicle 200 in a safe manner. To develop the safety model, in some embodiments, processing unit 110 may consider the position and motion of other vehicles, detected road edges and barriers, and/or general road shape descriptions extracted from map data (such as data from map database 160). By considering additional sources of information, processing unit 110 may provide redundancy for detecting road marks and lane geometry and increase the reliability of system 100.

FIG. 5D is a flowchart showing an exemplary process 500D for detecting traffic lights in a set of images, consistent with disclosed embodiments. Processing unit 110 may execute monocular image analysis module 402 to implement process 500D. At step 560, processing unit 110 may scan the set of images and identify objects appearing at locations in the images likely to contain traffic lights. For example, processing unit 110 may filter the identified objects to construct a set of candidate objects, excluding those objects unlikely to correspond to traffic lights. The filtering may be done based on various properties associated with traffic lights, such as shape, dimensions, texture, position (e.g., relative to vehicle 200), and the like. Such properties may be based on multiple examples of traffic lights and traffic control signals and stored in a database. In some embodiments, processing unit 110 may perform multi-frame analysis on the set of candidate objects reflecting possible traffic lights. For example, processing unit 110 may track the candidate objects across consecutive image frames, estimate the real-world positions of the candidate objects, and filter out those objects that are moving (which are unlikely to be traffic lights). In some embodiments, processing unit 110 may perform color analysis on the candidate objects and identify the relative position of the detected colors appearing inside possible traffic lights.

At step 562, processing unit 110 may analyze the geometry of a junction. The analysis may be based on any combination of: (i) the number of lanes detected on either side of vehicle 200, (ii) markings (such as arrow marks) detected on the road, and (iii) descriptions of the junction extracted from map data (such as data from map database 160). Processing unit 110 may conduct the analysis using information derived from execution of monocular analysis module 402. In addition, processing unit 110 may determine a correspondence between the traffic lights detected at step 560 and the lanes appearing near vehicle 200.

At step 564, as vehicle 200 approaches the junction, processing unit 110 may update the confidence level associated with the analyzed junction geometry and the detected traffic lights. For instance, the number of traffic lights estimated to appear at the junction, as compared with the number actually appearing at the junction, may impact the confidence level. Thus, based on the confidence level, processing unit 110 may delegate control to the driver of vehicle 200 in order to improve safety conditions. By performing steps 560, 562, and 564, processing unit 110 may identify traffic lights appearing within the set of captured images and analyze junction geometry information. Based on the identification and the analysis, processing unit 110 may cause one or more navigational responses in vehicle 200, as described above in connection with FIG. 5A.

FIG. 5E is a flowchart showing an exemplary process 500E for causing one or more navigational responses in vehicle 200 based on a vehicle path, consistent with disclosed embodiments. At step 570, processing unit 110 may construct an initial vehicle path associated with vehicle 200. The vehicle path may be represented using a set of points expressed in coordinates (x, z), and the distance d_i between two points in the set of points may fall in the range of 1 to 5 meters. In one embodiment, processing unit 110 may construct the initial vehicle path using two polynomials, such as left and right road polynomials. Processing unit 110 may calculate the geometric midpoint between the two polynomials and offset each point included in the resultant vehicle path by a predetermined offset (e.g., a smart lane offset), if any (an offset of zero may correspond to travel in the middle of a lane). The offset may be in a direction perpendicular to a segment between any two points in the vehicle path. In another embodiment, processing unit 110 may use one polynomial and an estimated lane width to offset each point of the vehicle path by half the estimated lane width plus a predetermined offset (e.g., a smart lane offset).
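The midpoint-and-offset construction at step 570 can be sketched as follows. This is an illustrative simplification: it assumes the road polynomials are given as coefficient lists [c0, c1, c2, c3] for x(z), and the function name, parameters, and the sign convention for the perpendicular offset are hypothetical choices rather than the disclosed implementation.

```python
import numpy as np

def initial_vehicle_path(left_coeffs, right_coeffs, z_points, lane_offset=0.0):
    """Construct an initial path as the geometric midpoint of the left and
    right road polynomials x(z) = c0 + c1*z + c2*z**2 + c3*z**3, then shift
    each point by lane_offset perpendicular to the local path segment.
    A zero offset corresponds to driving in the middle of the lane."""
    z = np.asarray(z_points, dtype=float)
    x_left = np.polyval(left_coeffs[::-1], z)    # polyval wants highest order first
    x_right = np.polyval(right_coeffs[::-1], z)
    path = np.column_stack([(x_left + x_right) / 2.0, z])
    if lane_offset:
        # unit normal of each segment; the last point reuses the last segment's normal
        d = np.diff(path, axis=0)
        n = np.column_stack([-d[:, 1], d[:, 0]])
        n /= np.linalg.norm(n, axis=1, keepdims=True)
        n = np.vstack([n, n[-1]])
        path += lane_offset * n
    return path
```

With straight boundaries at x = -2 and x = 2 the midpoint path runs along x = 0; a nonzero lane_offset shifts it sideways without changing the forward coordinates.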

At step 572, processing unit 110 may update the vehicle path constructed at step 570. Processing unit 110 may reconstruct the vehicle path constructed at step 570 using a higher resolution, such that the distance d_k between two points in the set of points representing the vehicle path is less than the distance d_i described above. For example, the distance d_k may fall in the range of 0.1 to 0.3 meters. Processing unit 110 may reconstruct the vehicle path using a parabolic spline algorithm, which may yield a cumulative distance vector S corresponding to the total length of the vehicle path (i.e., based on the set of points representing the vehicle path).
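The higher-resolution reconstruction can be sketched as follows. Note that plain linear interpolation over the cumulative distance vector S is used here in place of the parabolic spline algorithm, as an admitted simplification; function and parameter names are illustrative.

```python
import numpy as np

def resample_path(path, spacing=0.2):
    """Resample a coarse (x, z) path at roughly uniform arc-length spacing.
    Returns the resampled points and the cumulative distance vector S over
    the new samples. spacing of 0.1-0.3 m matches the d_k range in the text."""
    path = np.asarray(path, dtype=float)
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)   # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])           # cumulative distance
    s_new = np.arange(0.0, s[-1] + 1e-9, spacing)
    x_new = np.interp(s_new, s, path[:, 0])
    z_new = np.interp(s_new, s, path[:, 1])
    return np.column_stack([x_new, z_new]), s_new
```

A 4 m straight path sampled every 2 m, resampled at 0.5 m spacing, yields 9 points whose cumulative distance vector ends at the original path length.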

At step 574, processing unit 110 may determine a look-ahead point (expressed in coordinates as (x_l, z_l)) based on the updated vehicle path constructed at step 572. Processing unit 110 may extract the look-ahead point from the cumulative distance vector S, and the look-ahead point may be associated with a look-ahead distance and a look-ahead time. The look-ahead distance, which may have a lower bound ranging from 10 to 20 meters, may be calculated as the product of the speed of vehicle 200 and the look-ahead time. For example, as the speed of vehicle 200 decreases, the look-ahead distance may also decrease (e.g., until it reaches the lower bound). The look-ahead time, which may range from 0.5 to 1.5 seconds, may be inversely proportional to the gain of one or more control loops associated with causing a navigational response in vehicle 200, such as a heading error tracking control loop. The gain of the heading error tracking control loop may depend on, for example, the bandwidth of a yaw rate loop, a steering actuator loop, car lateral dynamics, and the like. Thus, the higher the gain of the heading error tracking control loop, the shorter the look-ahead time.

At step 576, processing unit 110 may determine a heading error and a yaw rate command based on the look-ahead point determined at step 574. Processing unit 110 may determine the heading error by calculating the arctangent of the look-ahead point, e.g., arctan(x_l/z_l). Processing unit 110 may determine the yaw rate command as the product of the heading error and a high-level control gain. If the look-ahead distance is not at the lower bound, the high-level control gain may be equal to: (2 / look-ahead time). Otherwise, the high-level control gain may be equal to: (2 * speed of vehicle 200 / look-ahead distance).
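The computation at step 576 follows directly from the formulas above and can be sketched as below; the function and parameter names are illustrative, and atan2 is used as a quadrant-safe form of arctan(x_l/z_l).

```python
import math

def yaw_rate_command(x_l, z_l, speed, look_ahead_dist, look_ahead_time,
                     min_look_ahead=10.0):
    """Compute (heading_error, yaw_rate_command) from a look-ahead point.

    heading_error = arctan(x_l / z_l)
    gain = 2 / look_ahead_time          if the distance is above its lower bound
         = 2 * speed / look_ahead_dist  otherwise
    """
    heading_error = math.atan2(x_l, z_l)
    if look_ahead_dist > min_look_ahead:
        gain = 2.0 / look_ahead_time
    else:
        gain = 2.0 * speed / look_ahead_dist
    return heading_error, gain * heading_error
```

For a look-ahead point 1 m to the side at 10 m ahead with a 1 s look-ahead time, the heading error is arctan(0.1) and the commanded yaw rate is twice that; at the distance lower bound the speed-based gain takes over.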

FIG. 5F is a flowchart showing an exemplary process 500F for determining whether a leading vehicle is changing lanes, consistent with disclosed embodiments. At step 580, processing unit 110 may determine navigation information associated with a leading vehicle (e.g., a vehicle traveling ahead of vehicle 200). For example, processing unit 110 may determine the position, velocity (e.g., direction and speed), and/or acceleration of the leading vehicle using the techniques described above in connection with FIGS. 5A and 5B. Processing unit 110 may also determine one or more road polynomials, a look-ahead point (associated with vehicle 200), and/or a snail trail (e.g., a set of points describing a path taken by the leading vehicle) using the techniques described above in connection with FIG. 5E.

At step 582, processing unit 110 may analyze the navigation information determined at step 580. In one embodiment, processing unit 110 may calculate the distance between the snail trail and the road polynomial (e.g., along the trail). If the variation of this distance along the trail exceeds a predetermined threshold (for example, 0.1 to 0.2 meters on a straight road, 0.3 to 0.4 meters on a moderately curved road, and 0.5 to 0.6 meters on a road with sharp curves), processing unit 110 may determine that the leading vehicle is likely changing lanes. In the case where multiple vehicles are detected traveling ahead of vehicle 200, processing unit 110 may compare the snail trails associated with each vehicle. Based on the comparison, processing unit 110 may determine that a vehicle whose snail trail does not match the snail trails of the other vehicles is likely changing lanes. Processing unit 110 may additionally compare the curvature of the snail trail (associated with the leading vehicle) with the expected curvature of the road segment in which the leading vehicle is traveling. The expected curvature may be extracted from map data (e.g., data from map database 160), from road polynomials, from the snail trails of other vehicles, from prior knowledge about the road, and the like. If the difference between the curvature of the snail trail and the expected curvature of the road segment exceeds a predetermined threshold, processing unit 110 may determine that the leading vehicle is likely changing lanes.
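The first check above can be sketched as follows. The thresholds are the ones given in the text; the point-list representation of the snail trail and the polynomial form x = f(z) are simplifying assumptions of this sketch, not the patent's internal data structures.

```python
# Thresholds from the text, in meters, keyed by road type.
THRESHOLDS = {"straight": 0.2, "moderate": 0.4, "sharp": 0.6}

def may_be_changing_lanes(trail_xz, road_poly_coeffs, road_kind):
    """trail_xz: list of (x, z) points of the lead vehicle's snail trail.
    road_poly_coeffs: coefficients of the road polynomial x = f(z),
    lowest order first. Returns True if the lateral distance between the
    trail and the polynomial varies along the trail by more than the
    threshold for the given road type."""
    def poly(z):
        return sum(c * z**i for i, c in enumerate(road_poly_coeffs))
    distances = [abs(x - poly(z)) for x, z in trail_xz]
    return max(distances) - min(distances) > THRESHOLDS[road_kind]
```

A trail that drifts laterally away from the road polynomial by more than the road-type threshold would flag a probable lane change.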

In another embodiment, processing unit 110 may compare the leading vehicle's instantaneous position with the look-ahead point (associated with vehicle 200) over a specific period of time (e.g., 0.5 to 1.5 seconds). If the distance between the leading vehicle's instantaneous position and the look-ahead point varies during the specific period of time, and the cumulative sum of variation exceeds a predetermined threshold (for example, 0.3 to 0.4 meters on a straight road, 0.7 to 0.8 meters on a moderately curved road, and 1.3 to 1.7 meters on a road with sharp curves), processing unit 110 may determine that the leading vehicle is likely changing lanes. In another embodiment, processing unit 110 may analyze the geometry of the snail trail by comparing the lateral distance traveled along the trail with the expected curvature of the snail trail. The expected radius of curvature may be determined according to the calculation (δ_z² + δ_x²) / 2 / (δ_x), where δ_x represents the lateral distance traveled and δ_z represents the longitudinal distance traveled. If the difference between the lateral distance traveled and the expected curvature exceeds a predetermined threshold (e.g., 500 to 700 meters), processing unit 110 may determine that the leading vehicle is likely changing lanes. In another embodiment, processing unit 110 may analyze the position of the leading vehicle. If the position of the leading vehicle obscures the road polynomial (e.g., the leading vehicle is overlaid on top of the road polynomial), processing unit 110 may determine that the leading vehicle is likely changing lanes. In the case where the position of the leading vehicle is such that another vehicle is detected ahead of the leading vehicle and the snail trails of the two vehicles are not parallel, processing unit 110 may determine that the (closer) leading vehicle is likely changing lanes.
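The expected-radius formula above follows from circle geometry: an arc through the origin reaching lateral offset δ_x at longitudinal distance δ_z satisfies (δ_x − R)² + δ_z² = R², which solves to R = (δ_z² + δ_x²) / (2·δ_x). A minimal sketch of that calculation:

```python
def expected_radius(dx_lateral, dz_longitudinal):
    """Radius of the circular arc through the origin and the point
    (dx, dz): from the chord relation (dx - R)^2 + dz^2 = R^2,
    solving for R gives R = (dz^2 + dx^2) / (2 * dx), matching the
    text's formula (delta_z^2 + delta_x^2) / 2 / (delta_x)."""
    return (dz_longitudinal**2 + dx_lateral**2) / (2.0 * dx_lateral)
```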

At step 584, processing unit 110 may determine whether the leading vehicle is changing lanes based on the analysis performed at step 582. For example, processing unit 110 may make the determination based on a weighted average of the individual analyses performed at step 582. Under such a scheme, for example, a decision by processing unit 110 based on a particular type of analysis that the leading vehicle is likely changing lanes may be assigned a value of "1" (and a "0" to represent a determination that the leading vehicle is unlikely to be changing lanes). Different analyses performed at step 582 may be assigned different weights, and the disclosed embodiments are not limited to any particular combination of analyses and weights.
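The weighted-average combination described above can be sketched as a simple weighted vote over the 1/0 decisions. The 0.5 decision threshold is an illustrative assumption; the text does not fix one.

```python
def lane_change_decision(votes, weights, threshold=0.5):
    """votes: per-analysis binary decisions (1 = lane change likely,
    0 = unlikely). weights: the weight assigned to each analysis.
    Returns True when the weighted average exceeds the threshold
    (an assumed cutoff for illustration)."""
    total = sum(weights)
    score = sum(v * w for v, w in zip(votes, weights)) / total
    return score > threshold
```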

FIG. 6 is a flowchart illustrating an exemplary process 600 for causing one or more navigational responses based on stereo image analysis, consistent with the disclosed embodiments. At step 610, processing unit 110 may receive a first and a second plurality of images via data interface 128. For example, cameras included in image acquisition unit 120 (such as image capture devices 122 and 124 having fields of view 202 and 204) may capture a first and a second plurality of images of an area forward of vehicle 200 and transmit them to processing unit 110 over a digital connection (e.g., USB, wireless, Bluetooth, etc.). In some embodiments, processing unit 110 may receive the first and second plurality of images via two or more data interfaces. The disclosed embodiments are not limited to any particular data interface configuration or protocol.

At step 620, processing unit 110 may execute stereo image analysis module 404 to perform stereo image analysis of the first and second plurality of images to create a 3D map of the road ahead of the vehicle and detect features within the images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, road hazards, and the like. Stereo image analysis may be performed in a manner similar to the steps described above in connection with FIGS. 5A-5D. For example, processing unit 110 may execute stereo image analysis module 404 to detect candidate objects (e.g., vehicles, pedestrians, road markings, traffic lights, road hazards, etc.) within the first and second plurality of images, filter out a subset of the candidate objects according to various criteria, and perform multi-frame analysis, construct measurements, and determine a confidence level for the remaining candidate objects. In performing the steps above, processing unit 110 may consider information from both the first and second plurality of images, rather than information from one set of images alone. For example, processing unit 110 may analyze differences in pixel-level data (or other subsets of data from within the two streams of captured images) for a candidate object appearing in both the first and second plurality of images. As another example, processing unit 110 may estimate a position and/or velocity of a candidate object (e.g., relative to vehicle 200) by observing that the object appears in one of the plurality of images but not the other, or relative to other differences that may exist with respect to objects appearing in the two image streams. For example, position, velocity, and/or acceleration relative to vehicle 200 may be determined based on trajectories, positions, movement characteristics, etc. of features associated with an object appearing in one or both of the image streams.
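The pixel-level differences between the two image streams discussed above are what standard stereo triangulation turns into depth. The patent does not spell out the formula, but the conventional pinhole relation Z = f·B/d (focal length × baseline / disparity) is the usual basis for such a 3D map; the sketch below illustrates that standard relation, not the patent's specific method.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Standard pinhole stereo relation: Z = f * B / d. A larger pixel
    disparity between the two image streams means a closer object.
    Zero disparity means the object is effectively at infinity."""
    if disparity_px <= 0:
        return float("inf")
    return focal_length_px * baseline_m / disparity_px
```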

At step 630, processing unit 110 may execute navigational response module 408 to cause one or more navigational responses in vehicle 200 based on the analysis performed at step 620 and the techniques described above in connection with FIG. 4. Navigational responses may include, for example, a turn, a lane shift, a change in acceleration, a change in velocity, braking, and the like. In some embodiments, processing unit 110 may use data derived from execution of velocity and acceleration module 406 to cause the one or more navigational responses. Additionally, multiple navigational responses may occur simultaneously, in sequence, or in any combination thereof.

FIG. 7 is a flowchart illustrating an exemplary process 700 for causing one or more navigational responses based on an analysis of three sets of images, consistent with the disclosed embodiments. At step 710, processing unit 110 may receive a first, second, and third plurality of images via data interface 128. For example, cameras included in image acquisition unit 120 (such as image capture devices 122, 124, and 126 having fields of view 202, 204, and 206) may capture a first, second, and third plurality of images of an area forward of and/or to the side of vehicle 200 and transmit them to processing unit 110 over a digital connection (e.g., USB, wireless, Bluetooth, etc.). In some embodiments, processing unit 110 may receive the first, second, and third plurality of images via three or more data interfaces. For example, each of image capture devices 122, 124, and 126 may have an associated data interface for communicating data to processing unit 110. The disclosed embodiments are not limited to any particular data interface configuration or protocol.

At step 720, processing unit 110 may analyze the first, second, and third plurality of images to detect features within the images, such as lane markings, vehicles, pedestrians, road signs, highway exit ramps, traffic lights, road hazards, and the like. The analysis may be performed in a manner similar to the steps described above in connection with FIGS. 5A-5D and 6. For example, processing unit 110 may perform monocular image analysis (e.g., via execution of monocular image analysis module 402 and based on the steps described above in connection with FIGS. 5A-5D) on each of the first, second, and third plurality of images. Alternatively, processing unit 110 may perform stereo image analysis (e.g., via execution of stereo image analysis module 404 and based on the steps described above in connection with FIG. 6) on the first and second plurality of images, the second and third plurality of images, and/or the first and third plurality of images. The processed information corresponding to the analysis of the first, second, and/or third plurality of images may be combined. In some embodiments, processing unit 110 may perform a combination of monocular and stereo image analyses. For example, processing unit 110 may perform monocular image analysis (e.g., via execution of monocular image analysis module 402) on the first plurality of images and stereo image analysis (e.g., via execution of stereo image analysis module 404) on the second and third plurality of images. The configuration of image capture devices 122, 124, and 126, including their respective locations and fields of view 202, 204, and 206, may influence the types of analyses performed on the first, second, and third plurality of images. The disclosed embodiments are not limited to a particular configuration of image capture devices 122, 124, and 126 or to particular types of analyses performed on the first, second, and third plurality of images.

In some embodiments, processing unit 110 may perform testing on system 100 based on the images acquired and analyzed at steps 710 and 720. Such testing may provide an indicator of the overall performance of system 100 for certain configurations of image capture devices 122, 124, and 126. For example, processing unit 110 may determine the proportion of "false hits" (e.g., cases in which system 100 incorrectly determined the presence of a vehicle or pedestrian) to "misses."

At step 730, processing unit 110 may cause one or more navigational responses in vehicle 200 based on information derived from two of the first, second, and third plurality of images. Selection of two of the first, second, and third plurality of images may depend on various factors, such as the number, type, and size of objects detected in each of the plurality of images. Processing unit 110 may also make the selection based on image quality and resolution, the effective field of view reflected in the images, the number of captured frames, the extent to which one or more objects of interest actually appear in the frames (e.g., the percentage of frames in which an object appears, the proportion of the object appearing in each such frame, etc.), and the like.

In some embodiments, processing unit 110 may select information derived from two of the first, second, and third plurality of images by determining the extent to which information derived from one image source is consistent with information derived from the other image sources. For example, processing unit 110 may combine the processed information derived from each of image capture devices 122, 124, and 126 (whether by monocular analysis, stereo analysis, or any combination of the two) and determine visual indicators (e.g., lane markings, a detected vehicle and its location and/or path, a detected traffic light, etc.) that are consistent across the images captured from each of image capture devices 122, 124, and 126. Processing unit 110 may also exclude information that is inconsistent across the captured images (e.g., a vehicle changing lanes, a lane model indicating a vehicle that is too close to vehicle 200, etc.). Thus, processing unit 110 may select information derived from two of the first, second, and third plurality of images based on the determinations of consistent and inconsistent information.
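The consistency-based selection described above can be sketched as choosing the pair of sources whose detections agree the most. Reducing each source's processed information to a set of detection labels is a simplification of this sketch; the real system compares positions, paths, and other indicators.

```python
from itertools import combinations

def select_consistent_pair(observations):
    """observations: dict mapping a source name (e.g., a capture device)
    to a set of detected-object labels. Returns the pair of sources whose
    detections agree the most, as a stand-in for the consistency
    determination described in the text."""
    def agreement(a, b):
        return len(observations[a] & observations[b])
    return max(combinations(observations, 2), key=lambda p: agreement(*p))
```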

The navigational responses may include, for example, a turn, a lane shift, a change in acceleration, and the like. Processing unit 110 may cause the one or more navigational responses based on the analysis performed at step 720 and the techniques described above in connection with FIG. 4. Processing unit 110 may also use data derived from execution of velocity and acceleration module 406 to cause the one or more navigational responses. In some embodiments, processing unit 110 may cause the one or more navigational responses based on a relative position, relative velocity, and/or relative acceleration between vehicle 200 and an object detected within any of the first, second, and third plurality of images. Multiple navigational responses may occur simultaneously, in sequence, or in any combination thereof.

Imaging System with Crossed Fields of View

In some embodiments, vehicle 200 may include an imaging system. The imaging system may include an imaging module configured to be mounted on or within a host vehicle, for example an autonomous vehicle such as vehicle 200. The term "imaging module," as used herein, refers to a structure that houses and orients at least one camera. The imaging module may include other hardware and wiring depending on the type of camera employed. The imaging module may in turn be coupled to a mounting assembly configured to attach the imaging module to the vehicle such that the at least one camera housed within the imaging module faces outward relative to the vehicle, such as vehicle 200. The term "mounting assembly," as used herein, refers to any hardware or device configured to couple the imaging module to a vehicle such as vehicle 200. The mounting assembly may include mounting brackets, mounting suction couplers, mounting frames, mounting adhesives, quick-connect couplers, and the like, as understood by one of ordinary skill in the art.

Throughout this disclosure, the term "cross" at times carries a specific geometric meaning as understood by one of ordinary skill in the art of cameras and camera optics. For example, the term "cross" does not necessarily denote an actual physical intersection, or even an intersection of projections, but rather an overlap of at least two projections in at least one plane. This is especially true for vector-based optical axes that are parallel to the ground and originate from alternative light sources at different heights (measured from the ground). For example, consider a first vector-based optical axis extending outward and parallel to the ground at a height of 1 meter, and a second vector-based optical axis extending outward and parallel to the ground at a height of 1.2 meters. According to this example, when projected onto a plan view, the first vector-based optical axis may "cross" the second vector-based optical axis even though the vectors do not necessarily intersect.

In other words, if the optical axis based on a first three-dimensional vector is projected as a first two-dimensional optical axis (or first line), the optical axis based on a second three-dimensional vector is projected as a second two-dimensional optical axis (or second line), and they are observed from a plan-view perspective (or another two-dimensional perspective), the first optical axis and the second optical axis will appear to intersect each other at a "crossing point." Additionally, it should be understood that the "crossing point" is derived from geometric principles and is not necessarily an actual intersection of the two optical axes in three-dimensional space.

Another way to understand the "crossing point" is that the "crossing point" is the location of a point on a plane that crosses the first vector-based optical axis and the second vector-based optical axis, as derived from geometric principles. Similarly, a "crossing plane" should be understood as a geometric plane that intersects at least two vector-based optical axes. Thus, a "crossing plane" includes a respective "crossing point" corresponding to the intersection of the projections of at least two optical axes.

In some embodiments, the imaging system may include at least two cameras having respective fields of view that overlap each other and form a combined field of view. In some embodiments, the imaging system may include three or more cameras having respective fields of view that cross that of at least one other camera and form a combined field of view.

In some embodiments, the mounting assembly may be configured to attach the imaging module to the vehicle such that the cameras face outward relative to the vehicle. In some embodiments, the mounting assembly may also orient the cameras such that they face outward relative to the vehicle and parallel to the ground. The mounting assembly may be configured to attach the imaging module to an interior window of the vehicle or to another component of the vehicle, such as a bumper, a pillar, a headlight, a taillight, and the like. In some embodiments, the mounting assembly may also be configured to compensate for the slope of the window.

In some embodiments, the imaging system may include a wiper assembly having a wiper blade configured to remove obstructions from the fields of view of the respective cameras. The wiper assembly may also include a timer, sensors, and a motor configured to control the wiper blade as needed.

In some embodiments, the imaging system may include a glare screen or filter. The glare screen or filter may improve the performance of the imaging system by reducing glare from incident light caused by the slant (e.g., slope) of the window. Additionally, a glare shield may be configured to provide an aperture to the cameras, thereby increasing the depth of field so that multiple different objects across a wide range of distances remain in focus.

FIG. 8 is a diagrammatic representation of an embodiment of an imaging system having two cameras. The exemplary embodiment may include at least a first camera 805 and a second camera 809. Although two cameras are depicted in FIG. 8, in some embodiments the imaging system may include more than two cameras (e.g., three cameras, four cameras, five cameras, etc.). In some embodiments, the cameras (e.g., first camera 805 and second camera 809) may share one or more features of image capture devices 122, 124, and 126 as described above.

As shown in FIG. 8, first camera 805 has a first field of view with an optical axis 805a, and second camera 809 has a second field of view with an optical axis 809a. According to some embodiments, first camera 805 and second camera 809 are arranged (e.g., oriented) such that first optical axis 805a of first camera 805 crosses second optical axis 809a of second camera 809 in at least one plane (e.g., a horizontal plane, a vertical plane, or both the horizontal and vertical planes). In some embodiments, first camera 805 and second camera 809 may be secured with an imaging module that is fixed or coupled to a mounting assembly, which in turn is fixed or coupled to a mounting bracket. In some embodiments, the imaging module is configured to arrange first camera 805 and second camera 809 along a semicircular arc.

It should be understood that FIG. 8 shows the view projections of cameras 805 and 809 in two dimensions (X, Y), or "2D." Optical axes 805a and 809a are vector-based optical axes, although they are shown as two-dimensional projections for illustration purposes only. As shown, optical axis 805a crosses optical axis 809a at a crossing point 888 of a crossing plane (not shown); although optical axes 805a and 809a appear to intersect in the 2D representation shown, they may not actually intersect in 3D. As shown, crossing point 888 coincides with the central region of an empty transparent region 803. In this way, optical axis 805a crosses optical axis 809a in a horizontal plane. In other embodiments, optical axis 805a crosses optical axis 809a in a vertical plane. In still other embodiments, optical axis 805a crosses optical axis 809a in both the horizontal plane and the vertical plane. Although crossing point 888 is shown at a location coinciding with the central region of the relatively small and transparent region 803, crossing point 888 may be positioned differently. For example, the crossing point 888 of the crossing plane may be located farther from first camera 805 and second camera 809 such that it lies outside the relatively small and transparent region 803. Alternatively, the crossing point 888 of the crossing plane may be located closer to first camera 805 and second camera 809 such that it lies outside the relatively small and transparent region 803. In this way, the crossing point 888 of the crossing plane may be located at a predetermined distance from the relatively small and transparent region 803, for example in a range of about 0.2 meters to 2.0 meters, or 0.5 meters to 1.0 meters. In at least one embodiment, first camera 805 and second camera 809 are configured to be mounted behind a window of the vehicle, and the crossing point 888 of the crossing plane is located between the window and first camera 805 and second camera 809. In at least one embodiment, first camera 805 and second camera 809 are configured to be mounted behind a window of the vehicle, and the crossing point 888 of the crossing plane is located at a predetermined distance from an outer surface of the window.

In the exemplary embodiment, first camera 805 is focused at a focal point P1, and second camera 809 is focused at a focal point P2. In this way, focal point P1 is located at a first horizontal distance beyond the crossing point 888 of the crossing plane, and focal point P2 is located at a second horizontal distance beyond the crossing point 888 of the crossing plane. As shown, the first horizontal distance and the second horizontal distance are substantially equal, although in alternative embodiments they may be different distances. For example, P1 may be at about 1.5 times the horizontal distance of P2. In other embodiments, P1 may be at about 1.25, 1.75, 2.0, 2.5, or 3.0 times the horizontal distance of P2. Additionally, it should be noted that P1 and P2 are not necessarily singular points in three-dimensional space; that is, as understood by one of ordinary skill in the art of cameras and camera optics, focal point P1 and focal point P2 may each encompass a respective focal region. Furthermore, the focal region corresponding to focal point P1 and the focal region corresponding to focal point P2 may at least partially overlap.

In the exemplary embodiment, the crossing point 888 of the crossing plane is spaced from first camera 805 and second camera 809 by a separation distance Dy that is approximately equal to the shortest distance between the lens of first camera 805 and the lens of second camera 809. As shown, the shortest distance between the lens of first camera 805 and the lens of second camera 809 is denoted Dx, and the separation distance between the crossing point 888 of the crossing plane and first camera 805 and second camera 809 is denoted Dy. It should be understood that the separation distance Dy may be measured from the lens of camera 805 or of camera 809, and therefore the separation distance Dy may differ. In some embodiments, the separation distance Dy may fall within a range of one to four times the shortest distance Dx. In other embodiments, the separation distance Dy may fall within a range of one to two times, two to three times, three to four times, or two to four times the shortest distance Dx. In some embodiments, Dx and Dy may be expressed as a ratio defining the distance at which the crossing point of the crossing plane (e.g., crossing point 888) may be located. For example, Dy ≤ N × Dx, where 2 ≤ N ≤ 4.
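The Dx/Dy relationship above can be checked with elementary geometry. Assuming, for illustration only, two cameras separated laterally by Dx and each yawed inward toward the centerline by the same angle (the patent does not fix the inward angles), the projected axes cross at a forward distance Dy = (Dx / 2) / tan(angle):

```python
import math

def crossing_distance(dx_m, inward_angle_deg):
    """Forward distance at which the projected optical axes of two
    cameras cross, assuming (for illustration) equal inward yaw angles
    and a lateral lens separation of dx_m."""
    return (dx_m / 2.0) / math.tan(math.radians(inward_angle_deg))

def satisfies_ratio(dy_m, dx_m, n=4):
    """The text's constraint Dy <= N * Dx with 2 <= N <= 4; n defaults
    to the upper end of that range."""
    return dy_m <= n * dx_m
```

For example, a 0.2 m lens separation with 45-degree inward yaws gives a 0.1 m crossing distance, well within the Dy ≤ 4·Dx constraint.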

As shown, the first field of view of camera 805 and the second field of view of camera 809 overlap and form a combined field of view of about 90 degrees. In some embodiments, the first field of view of camera 805 and the second field of view of camera 809 may only partially overlap, yet still form a combined field of view. The empty transparent region 803 may be bounded by component 801. Component 801 may be a solid feature such as a vehicle component, for example a pillar, a bumper, a door panel, a headlight, a side window, a front window, and the like. In at least one embodiment, component 801 may include the empty transparent region 803, which allows light to pass therethrough, such that empty transparent region 803 is at least partially surrounded by a non-transparent region corresponding to the shaded region of FIG. 8 (i.e., component 801). In this way, the relatively small and transparent region 803 is smaller than the comparable transparent region that would be required for a wide-angle camera having a wide-angle field of view equal to the combined field of view of the first and second cameras. In other embodiments, component 801 may be the perimeter of an imaging bracket or module adhered to the exterior of the host vehicle.

FIG. 9 is a schematic representation of a single-camera wide-field-of-view system. In some embodiments, the camera (e.g., single camera 905) may share one or more features of image capture devices 122, 124, and 126 described above. The single camera 905 has a field of view with an optical axis 905a that projects outward through a relatively large blank transparent region 903. As shown, the optical axis 905a projects outward and divides the field of view of the single camera 905 into two symmetrical regions. The field of view of the single camera 905 is substantially equal to the combined field of view of the first camera 805 and the second camera 809 of FIG. 8. As shown, the single camera 905 has a field of view of approximately 90 degrees. Comparing FIG. 9 with FIG. 8, the blank transparent region 903 (which may be bounded by component 901) is larger than the transparent region 803 of FIG. 8, even though the combined field of view of the imaging system of FIG. 8 is substantially equal to the field of view of the single camera of FIG. 9. The blank transparent region 903 must be larger than the blank transparent region 803 to accommodate a field of view of similar coverage, because the single-camera wide-field-of-view system uses only the single camera 905. Accordingly, the footprint associated with the single-camera wide-field-of-view system is larger than that of the embodiment of FIG. 8. An object of the present disclosure is to minimize the footprint of the imaging system while accommodating a wide (combined) field of view.
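The footprint difference can be illustrated with simple geometry: approximating the required transparent-window width as the width of the viewing cone at the window plane, a narrow-field camera whose axis crosses near the window needs a smaller aperture than a single wide-field camera at the same setback. The setback distance and field-of-view values below are assumed for illustration and are not from the patent:

```python
# Illustrative sketch (assumed numbers): approximate window width as the
# width of a camera's viewing cone at the window plane, a distance d
# in front of the lens.
import math

def window_width(fov_deg, d):
    """Width of a fov_deg viewing cone at distance d from its apex."""
    return 2.0 * d * math.tan(math.radians(fov_deg / 2.0))

# Single 90-degree camera set back 0.1 m behind the window:
single = window_width(90.0, 0.1)   # ~0.20 m

# Two 66-degree cameras whose axes cross at the window plane: each cone
# is narrower at the window, and the cones largely coincide near the
# crossing point, so the shared aperture is roughly one narrow cone width.
crossed = window_width(66.0, 0.1)  # ~0.13 m
```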

FIG. 10 is a schematic representation of an embodiment of an imaging system with three cameras. This exemplary embodiment may be similar to the embodiment of FIG. 8. The exemplary embodiment may include a first camera 1005, a second camera 1009, and a third camera 1007. In some embodiments, the cameras (e.g., the first camera 1005, the second camera 1009, and the third camera 1007) may share one or more characteristics of image capture devices 122, 124, and 126 described above.

As shown in FIG. 10, the first camera 1005 has a first field of view with an optical axis 1005a, the second camera 1009 has a second field of view with an optical axis 1009a, and the third camera 1007 has a third field of view with an optical axis 1007a. It should be understood that FIG. 10 shows the fields of view of cameras 1005, 1007, and 1009 projected in two dimensions (X, Y), or "2D." The optical axes 1005a, 1007a, and 1009a are vector-based optical axes, although they are shown as two-dimensional projections for illustration purposes only. As shown, the optical axes 1005a, 1009a, and 1007a intersect one another at at least one intersection point 1010 of an intersecting plane (e.g., a horizontal plane, a vertical plane, or both). In this exemplary embodiment, the third camera 1007 is positioned substantially centrally between the first camera 1005 and the second camera 1009, equidistant from each. In other embodiments, however, the third camera 1007 may be positioned differently, for example off-center and/or closer to the first camera 1005 or the second camera 1009. In still other embodiments, the third camera 1007 may be positioned in front of the first camera 1005 and the second camera 1009, for example closer to the relatively small blank transparent region 1003.
As shown, the blank transparent region 1003 may be bounded by component 1001. Component 1001 may be a solid feature, such as a vehicle component, for example a pillar, bumper, door panel, headlight, side window, or front window. In some embodiments, the first camera 1005, the second camera 1009, and the third camera 1007 may be secured by an imaging module, which is secured or coupled to a mounting assembly, which in turn is secured or coupled to a mounting bracket. In some embodiments, the imaging module may be configured to arrange the first camera 1005, the second camera 1009, and the third camera 1007 along a semicircular arc. For example, the imaging module may be shaped as a semicircle.

As shown, the optical axes 1005a, 1009a, and 1007a intersect one another at an intersection point 1010 of an intersecting plane (not shown). As shown, the intersection point 1010 of the intersecting planes coincides with the central area of the blank transparent region 1003. It should be noted, however, that the optical axes 1005a, 1009a, and 1007a may intersect independently in any manner. That is, there may be additional intersection points and/or intersecting planes (not shown). Additionally, in some embodiments, the intersection point 1010 may represent multiple independent intersection points of independent intersecting planes that happen to coincide. For example, optical axes 1005a and 1009a intersect to form a first intersection point of a first intersecting plane, optical axes 1005a and 1007a intersect to form a second intersection point of a second intersecting plane, and optical axes 1009a and 1007a intersect to form a third intersection point of a third intersecting plane. As shown, the first intersection point of the first intersecting plane, the second intersection point of the second intersecting plane, and the third intersection point of the third intersecting plane coincide with one another and form the coincident intersection point of the coincident intersecting planes, denoted intersection point 1010.
Accordingly, it should be understood that the term "intersection point" refers to a location at which at least two optical axes (such as optical axes 1005a, 1009a, and 1007a) cross one another in at least one plane, for example a horizontal plane and/or a vertical plane.
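The pairwise crossing of two optical axes can be computed as a planar ray intersection. The following sketch treats each axis as a 2D ray from the camera lens; the camera positions and yaw angles in the usage line are assumed for illustration and are not from the patent:

```python
# Illustrative sketch: intersection point of two optical axes treated as
# 2D rays. Positions and yaw angles in the example are assumed.
import math

def axis_intersection(p1, yaw1_deg, p2, yaw2_deg):
    """Intersect rays p1 + t*d1 and p2 + s*d2 (yaw measured from +x).
    Returns the (x, y) crossing point, or None for parallel axes."""
    d1 = (math.cos(math.radians(yaw1_deg)), math.sin(math.radians(yaw1_deg)))
    d2 = (math.cos(math.radians(yaw2_deg)), math.sin(math.radians(yaw2_deg)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # parallel axes never cross
    # Solve p1 + t*d1 = p2 + s*d2 for t (Cramer's rule).
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two cameras 0.05 m apart, axes angled inward so they cross ahead of them:
crossing = axis_intersection((-0.025, 0.0), 70.0, (0.025, 0.0), 110.0)
```

With these assumed values the crossing lies about 0.069 m ahead of the line of lenses, i.e., Dy ≈ 1.4 × Dx, within the one-to-four-times range described above.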

Although the intersection point 1010 of the intersecting planes is shown at a location coinciding with the central area of the relatively small transparent region 1003, the intersection point 1010 may be positioned differently. For example, the intersection point 1010 may be located farther from the first camera 1005, the second camera 1009, and/or the third camera 1007, such that it lies outside the relatively small transparent region 1003. Alternatively, the intersection point 1010 may be located closer to the first camera 1005, the second camera 1009, and/or the third camera 1007, such that it again lies outside the relatively small transparent region 1003. In this way, the intersection point 1010 may be located at a predetermined distance from the relatively small transparent region 1003, for example in a range of approximately 0.01 to 2.0 meters, 0.1 to 0.5 meters, or 0.5 to 1.0 meters.

In an exemplary embodiment, the first camera 1005 focuses at focal point P1, the second camera 1009 focuses at focal point P2, and the third camera 1007 focuses at focal point P3. Thus, focal point P1 is located a first horizontal distance beyond the intersection point 1010, focal point P2 a second horizontal distance beyond it, and focal point P3 a third horizontal distance beyond it. As shown, the first, second, and third horizontal distances are substantially equal, although in alternative embodiments they may differ. For example, P1 may be at approximately 1.5 times the horizontal distance of P2 and/or P3, or vice versa. In other embodiments, P1 may be at approximately 1.25, 1.75, 2.0, 2.5, or 3.0 times the horizontal distance of P2. Furthermore, it should be noted that P1, P2, and P3 are not necessarily single points in three-dimensional space; rather, each may comprise a respective focal region, as understood by those of ordinary skill in the art of cameras and camera optics. Further still, the focal regions corresponding to P1, P2, and P3 may at least partially overlap.

In an exemplary embodiment, the intersection point 1010 is spaced from the first camera 1005, the second camera 1009, and the third camera 1007 by a separation distance Dy, which is approximately equal to the shortest distance Dx between the lens of the first camera 1005 and the lens of the third camera 1007. As shown, the shortest distance between the lens of the first camera 1005 and the lens of the third camera 1007 is denoted Dx, and the separation distance between the intersection point 1010 of the intersecting planes and the first camera 1005 and second camera 1009 is denoted Dy. It should be understood that the separation distance Dy may be measured from the lens of camera 1005, camera 1009, or camera 1007, and therefore the separation distance Dy may differ among them. It should also be understood that there may be a unique shortest distance Dx between each pair of cameras. Thus, each of cameras 1005, 1009, and 1007 may have a respective separation distance Dy and a respective shortest distance Dx. In some embodiments, the separation distance Dy may fall within a range of one to four times the shortest distance Dx. In other embodiments, the separation distance Dy may fall within a range of one to two times, two to three times, three to four times, or two to four times the shortest distance Dx.
In some embodiments, Dx may refer to the separation between the two cameras farthest from each other, for example the first camera 1005 and the second camera 1009. In some embodiments, Dx and Dy may be expressed as a ratio defining the distance at which the intersection point 1010 of the intersecting planes may be located. For example, Dy ≤ N × Dx, where 2 ≤ N ≤ 4.

As shown, the first field of view of the first camera 1005 overlaps the second field of view of the second camera 1009. The fields of view of the first camera 1005 and the second camera 1009 each overlap the field of view of the third camera 1007. In this exemplary embodiment, the optical axis 1005a intersects the optical axes 1009a and 1007a in the central area of the relatively small blank transparent region 1003. In this way, the relatively small transparent region 1003 is smaller than the comparable transparent region that would be required by a wide-angle camera having a wide-angle field of view equal to the combined field of view of the first, second, and third cameras. As shown, the first field of view of camera 1005, the second field of view of camera 1009, and the third field of view of camera 1007 form a combined field of view of approximately 150 degrees. In other embodiments, the combined field of view may range from 45 degrees to 180 degrees. For example, the combined field of view may be 55, 65, 75, 85, 95, 105, 115, 125, 135, 145, 155, 165, or 175 degrees.

FIG. 11A is a schematic plan view representation of an exemplary imaging system consistent with the three-camera embodiment of FIG. 10. As shown, the exemplary imaging system of FIG. 11A may be mounted on a rear side window of a vehicle 99, or on a component of the body of the vehicle 99 such as a pillar portion. In some embodiments, the vehicle 99 may be an autonomous vehicle. For ease of understanding, the vehicle 99 is shown with a central longitudinal axis Cx that divides the vehicle 99 longitudinally into two substantially symmetrical halves. The exemplary imaging system includes three cameras, each with a respective field of view. For example, a first field of view F1 corresponds to a first camera with a 66-degree lens, a second field of view F2 corresponds to a second camera with a 66-degree lens, and a third field of view F3 corresponds to a third camera with a 66-degree lens. In the exemplary embodiment, each camera has a field of view of 66 degrees. The first field of view F1 overlaps the third field of view F3. Likewise, the second field of view F2 overlaps the third field of view F3.

In FIG. 11A, the first field of view F1 may be offset by 5 degrees from the central longitudinal axis Cx of the vehicle 99. In other embodiments, F1 may be offset from the central longitudinal axis Cx of the vehicle 99 in a range of 5 to 30 degrees, for example 10, 15, 20, or 25 degrees. The second field of view F2 may be offset by 13 degrees from the central longitudinal axis Cx of the vehicle 99. In other embodiments, F2 may be offset from the central longitudinal axis Cx in a range of 5 to 30 degrees, for example 10, 15, 20, or 25 degrees. In the exemplary embodiment, the third field of view F3 overlaps the first field of view F1 and the second field of view F2 by equal amounts. In other embodiments, the third field of view F3 is substantially perpendicular to the central longitudinal axis Cx of the vehicle 99. In still other embodiments, the third field of view F3 may be offset from the central longitudinal axis Cx of the vehicle 99, i.e., its optical axis is not perpendicular to the central longitudinal axis Cx.

In the exemplary embodiment, the combined field of view is 162 degrees. In other embodiments, the combined field of view may be larger or smaller. For example, in other exemplary embodiments, the combined field of view may range from 100 to 175 degrees, for example 110, 120, 130, 140, 150, or 165 degrees.
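One consistent reading of the stated offsets is that the 162-degree figure is the union of the three fields: with a 5-degree margin on one side of the vehicle axis and a 13-degree margin on the other, the union spans 180 − 5 − 13 = 162 degrees. A sketch, treating each field of view as an angular interval measured from the axis Cx (the interval placement is an assumed reading of FIG. 11A, not patent text):

```python
# Illustrative sketch: combined FOV as the union of angular intervals.
# Intervals are (start, end) in degrees measured from the axis Cx; their
# placement is an assumed reading of FIG. 11A.
def combined_fov(intervals):
    """Total angular extent covered by possibly-overlapping intervals."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return sum(end - start for start, end in merged)

fields = [
    (5.0, 71.0),     # F1: 66-degree field, near edge 5 deg off Cx
    (57.0, 123.0),   # F3: 66-degree field, axis perpendicular to Cx
    (101.0, 167.0),  # F2: 66-degree field, near edge 13 deg off Cx (far side)
]
total = combined_fov(fields)  # 162.0 degrees
```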

FIG. 11B is a schematic plan view representation of an exemplary imaging system consistent with the two-camera embodiment of FIG. 8. The exemplary imaging system is similar to the embodiment of FIG. 11A; similar features will therefore not be described in detail. In the exemplary embodiment, a first field of view F1B corresponds to a first camera with a 66-degree lens, and a second field of view F3B corresponds to a second camera with a 66-degree lens. In the exemplary embodiment, each camera has a field of view of 66 degrees. The first field of view F1B overlaps the second field of view F3B.

In FIG. 11B, the first field of view F1B may be offset by 5 degrees from the central longitudinal axis Cx of the vehicle 99. In other embodiments, F1B may be offset from the central longitudinal axis Cx of the vehicle 99 in a range of 5 to 30 degrees, for example 10, 15, 20, or 25 degrees. The second field of view F3B may be offset by 60 degrees from the central longitudinal axis Cx of the vehicle 99. In other embodiments, F3B may be offset from the central longitudinal axis Cx in a range of 30 to 90 degrees, for example 40, 50, 60, 70, or 80 degrees.

In the exemplary embodiment, the combined field of view is 115 degrees. In other embodiments, the combined field of view may be larger or smaller. For example, in other exemplary embodiments, the combined field of view may range from 90 to 175 degrees, for example 95, 100, 110, 120, 130, 140, 150, or 165 degrees.

FIG. 12A is a schematic plan view representation of an exemplary imaging system consistent with the three-camera embodiment of FIG. 10. In FIG. 12A, the exemplary imaging system includes three cameras, each with a respective field of view. For example, a first field of view F4 corresponds to a first camera with a 52-degree lens, a second field of view F5 corresponds to a second camera with a 52-degree lens, and a third field of view F6 corresponds to a third camera with a 100-degree lens. In the exemplary embodiment, the first field of view F4 overlaps the third field of view F6. Likewise, the second field of view F5 overlaps the third field of view F6. In other embodiments, the first camera may include a 52-degree lens and the second camera may include a 52-degree lens.

In FIG. 12A, the first field of view F4 is offset by 5 degrees from the central longitudinal axis Cx of the vehicle 99. In other embodiments, F4 may be offset from the central longitudinal axis Cx of the vehicle 99 in a range of 5 to 30 degrees, for example 10, 15, 20, or 25 degrees. The second field of view F5 may be offset by 13.1 degrees from the central longitudinal axis Cx of the vehicle 99. In other embodiments, F5 may be offset from the central longitudinal axis Cx in a range of 5 to 30 degrees, for example 10, 15, 20, or 25 degrees. In the exemplary embodiment, the third field of view F6 overlaps the first field of view F4 and the second field of view F5 by equal amounts. In other embodiments, the third field of view F6 is substantially perpendicular to the central longitudinal axis Cx of the vehicle 99. In still other embodiments, the third field of view F6 may be offset from the central longitudinal axis Cx of the vehicle 99, i.e., its optical axis is not perpendicular to the central longitudinal axis Cx.

In some embodiments, the third field of view F6 is oriented such that it overlaps the first field of view F4 and the second field of view F5 by equal amounts. In the exemplary embodiment, the combined field of view is 161.9 degrees. In other embodiments, the combined field of view may be larger or smaller. For example, in other exemplary embodiments, the combined field of view may range from 100 to 175 degrees, for example 110, 120, 130, 140, 150, or 165 degrees.

FIG. 12B is a schematic plan view representation of an exemplary imaging system consistent with the two-camera embodiment of FIG. 8. The exemplary imaging system is similar to the embodiment of FIG. 12A; similar features will therefore not be described in detail. In FIG. 12B, the exemplary imaging system includes two cameras, each with a respective field of view. For example, a first field of view F4B corresponds to a first camera with a 52-degree lens, and a second field of view F6B corresponds to a second camera with a 100-degree lens. In the exemplary embodiment, the first field of view F4B overlaps the second field of view F6B.

In FIG. 12B, the first field of view F4B is offset by 5 degrees from the central longitudinal axis Cx of the vehicle 99. In other embodiments, F4B may be offset from the central longitudinal axis Cx of the vehicle 99 in a range of 5 to 30 degrees, for example 10, 15, 20, or 25 degrees. In the exemplary embodiment, the second field of view F6B is offset by 43 degrees from the central longitudinal axis Cx of the vehicle 99. In other embodiments, the second field of view F6B is substantially perpendicular to the central longitudinal axis Cx of the vehicle 99. In still other embodiments, the second field of view F6B may be offset from the central longitudinal axis Cx of the vehicle 99, i.e., its optical axis is not perpendicular to the central longitudinal axis Cx.

In the exemplary embodiment, the combined field of view is 132 degrees. In other embodiments, the combined field of view may be larger or smaller. For example, in other exemplary embodiments, the combined field of view may range from 100 to 175 degrees, for example 110, 120, 130, 140, 150, or 165 degrees.

FIG. 13 is a schematic plan view representation of an exemplary imaging system consistent with the three-camera embodiment of FIG. 10. In FIG. 13, the exemplary imaging system includes three cameras, each with a respective field of view. For example, a first field of view F7 corresponds to a first camera with a 52-degree lens, a second field of view F8 corresponds to a second camera with a 52-degree lens, and a third field of view F9 corresponds to a third camera with a 100-degree lens. In the exemplary embodiment, the first field of view F7 overlaps the third field of view F9 by 17 degrees. Likewise, the second field of view F8 overlaps the third field of view F9 by 17 degrees. In other embodiments, the first field of view F7 and the second field of view F8 may each overlap the third field of view F9 in a range of 10 to 35 degrees, for example approximately 15, 20, 25, 30, or 35 degrees. In FIG. 13, the first field of view F7 is offset by 5 degrees from the central longitudinal axis Cx of the vehicle 99, and the second field of view F8 is offset by 5 degrees from the central longitudinal axis Cx of the vehicle 99. In this exemplary embodiment, the third field of view F9 is substantially perpendicular to the central longitudinal axis Cx of the vehicle 99, i.e., its optical axis (not shown) is substantially perpendicular to the central longitudinal axis Cx of the vehicle 99.
In other exemplary embodiments, the third field of view F9 may be offset from the central longitudinal axis Cx of the vehicle 99, i.e., its optical axis is not perpendicular to the central longitudinal axis Cx.
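The 17-degree overlaps are consistent with the stated geometry: a 100-degree field centered perpendicular to Cx spans 40 to 140 degrees from the axis, while a 52-degree field whose near edge sits 5 degrees off Cx spans 5 to 57 degrees, overlapping it by 57 − 40 = 17 degrees. A sketch of that check, with the interval placement again an assumed reading of the figure rather than patent text:

```python
# Illustrative sketch: pairwise overlap of angular intervals, in degrees
# measured from Cx. Interval placement is an assumed reading of FIG. 13.
def overlap(a, b):
    """Angular overlap of two (start, end) intervals; 0 if disjoint."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

f7 = (5.0, 57.0)     # 52-degree field, near edge 5 deg off Cx
f9 = (40.0, 140.0)   # 100-degree field, axis perpendicular to Cx
f8 = (123.0, 175.0)  # 52-degree field, near edge 5 deg off Cx (far side)

o79 = overlap(f7, f9)  # 17.0
o89 = overlap(f8, f9)  # 17.0
```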

FIG. 14 is a schematic plan view representation of an exemplary imaging system consistent with the three-camera embodiment of FIG. 10. In FIG. 14, the exemplary imaging system includes three cameras, each with a respective field of view. For example, a first field of view F10 corresponds to a first camera with a 52-degree lens, a second field of view F11 corresponds to a second camera with a 52-degree lens, and a third field of view F12 corresponds to a third camera with a 100-degree lens. In the exemplary embodiment, the first field of view F10 is offset by 7 degrees from the central longitudinal axis Cx of the vehicle 99, and the second field of view F11 is offset by 13 degrees from the central longitudinal axis Cx of the vehicle 99. In the exemplary embodiment, the third field of view F12 is not exactly perpendicular to the central longitudinal axis Cx of the vehicle 99, but may be described as substantially perpendicular.

FIGS. 15 and 16 are perspective view representations of another exemplary imaging system having a combined field of view consistent with the disclosed embodiments. The imaging system 1400 includes a first camera 1402 and a second camera 1404. The imaging system 1400 may further include a mounting assembly 1408 that houses the first camera 1402 and the second camera 1404. The mounting assembly 1408 may be configured to attach the imaging system 1400 to a vehicle such that the first camera 1402 and the second camera 1404 face outward relative to the vehicle, as shown in FIG. 15. Although shown in FIG. 15 as located on the side of the vehicle 99, the imaging system 1400 may be located behind any window of the vehicle 99 (e.g., a front window, side window, or rear window), or may be included in or attached to any component of the vehicle 99 (e.g., a pillar, bumper, door panel, headlight, trunk lid, fender, roof rack, crossbar, etc.). For example, when included in a vehicle component, the imaging system 1400 may be positioned behind a transparent surface (e.g., glass, plexiglass, etc.) provided at an opening of the component.

The mounting assembly 1408 may be configured to orient the first camera 1402 and the second camera 1404 parallel to the ground. Alternatively, the mounting assembly 1408 may be configured to orient the first camera 1402 and the second camera 1404 at an offset angle relative to the ground, for example 5, 10, or 15 degrees.

The imaging system 1400 may further include a wiper assembly 1406 that includes at least one wiper blade. The wiper assembly 1406 may be configured to clear obstructions from the respective fields of view of the first camera 1402 and the second camera 1404. In some embodiments, the wiper assembly 1406 may be mounted on the exterior of the vehicle 99. The wiper assembly 1406 may include sensing features, timer features, electric actuators, articulating actuators, rotary actuators, and the like.

FIGS. 17 and 18 are perspective view representations of another exemplary imaging system having a combined field of view consistent with the disclosed embodiments. The imaging system 1800 includes a first camera 1802 (partially obscured for illustration purposes), a second camera 1804, and a third camera 1806. A mounting assembly (see 1408 of FIG. 16) is configured to attach the imaging system 1800 to the vehicle 99 such that the first camera 1802, the second camera 1804, and the third camera 1806 face outward relative to a rear side window of the vehicle 99. In other embodiments, the mounting assembly (see 1408 of FIG. 16) may be configured to attach the imaging system 1800 to the front windshield of the vehicle 99, as discussed in further detail below in connection with FIGS. 19 and 20. In still other embodiments, the mounting assembly (see 1408 of FIG. 16) may be configured to attach the imaging system 1800 to a component of the vehicle 99, for example a bumper, pillar, door panel, headlight, trunk lid, fender, roof rack, crossbar, etc.

According to a front-windshield embodiment, imaging system 1800 may be attached to the front windshield of the vehicle. In this exemplary embodiment, the third camera module may be mounted facing directly forward (perpendicular to an axis passing through the rear wheels of vehicle 99), so that the optical axis of the third camera also points directly forward. The first camera may be mounted to the left of the third camera and oriented such that the optical axis of the first camera crosses the optical axis of the third camera (e.g., from left to right), and the second camera may be mounted to the right of the third camera and oriented such that the optical axis of the second camera crosses the optical axis of the third camera (e.g., from right to left). It should also be understood that the optical axes of the first and second cameras may cross each other as well, although they need not necessarily do so.
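The left-to-right / right-to-left crossing of optical axes described above is straightforward planar geometry. The sketch below (Python; the camera positions, yaw angles, and axis convention are illustrative assumptions, not values from the disclosure) finds the point where two crossed axes meet in the horizontal plane:

```python
import math

def axis_intersection(p1, yaw1_deg, p2, yaw2_deg):
    """Intersect two camera optical axes in the horizontal (x, y) plane.

    p1, p2 are camera positions; yaw is measured in degrees from the
    vehicle's forward (+y) direction, positive toward the right.
    Returns the (x, y) intersection point, or None for parallel axes.
    """
    d1 = (math.sin(math.radians(yaw1_deg)), math.cos(math.radians(yaw1_deg)))
    d2 = (math.sin(math.radians(yaw2_deg)), math.cos(math.radians(yaw2_deg)))
    # Solve p1 + t*d1 == p2 + s*d2 for t by Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        return None  # parallel optical axes never cross
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Hypothetical layout: first camera 5 cm left of center, axis angled
# left-to-right (+20 deg); second camera 5 cm right of center, axis
# angled right-to-left (-20 deg).
pt = axis_intersection((-0.05, 0.0), 20.0, (0.05, 0.0), -20.0)
```

For symmetric toe-in angles the crossing point lands on the centerline between the two cameras, a short distance in front of them.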

FIGS. 19 and 20 illustrate various embodiments of an exemplary imaging system for use on the front windshield of vehicle 99. FIGS. 19 and 20 are similar to the previously described embodiments, so like features will not be described again in detail. It should be understood that all of the foregoing exemplary ranges apply equally to the embodiments of FIGS. 19 and 20.

FIG. 19 is a schematic plan view representation of an exemplary imaging system consistent with the three-camera embodiments of FIGS. 10 and 11-14. In FIG. 19, the exemplary imaging system includes three cameras, each having a respective field of view. For example, a first field of view F13 corresponds to a first camera having a 66-degree lens, a second field of view F14 corresponds to a second camera having a 66-degree lens, and a third field of view F15 corresponds to a third camera having a 66-degree lens. In the exemplary embodiment, the first field of view F13 overlaps the third field of view F15. Likewise, the second field of view F14 overlaps the third field of view F15.

FIG. 20 is a schematic plan view representation of an exemplary imaging system consistent with the three-camera embodiment of FIG. 19. In FIG. 20, the exemplary imaging system includes three cameras, each having a respective field of view. For example, a first field of view F16 corresponds to a first camera having a 52-degree lens, a second field of view F17 corresponds to a second camera having a 52-degree lens, and a third field of view F18 corresponds to a third camera having a 100-degree lens. In the exemplary embodiment, the first field of view F16 overlaps the third field of view F18. Likewise, the second field of view F17 overlaps the third field of view F18. In some embodiments, the combined field of view may cover 180 degrees in front of the vehicle.
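A combined field of view such as the 180 degrees mentioned above can be checked by merging the angular intervals covered by each lens. This is a minimal sketch (Python); the yaw angles chosen for the side cameras are hypothetical and are not taken from the disclosure:

```python
def combined_fov(cameras):
    """Union (in degrees) of horizontal field-of-view intervals.

    Each camera is (yaw_deg, fov_deg): the yaw of its optical axis and
    the lens's full horizontal field of view. Overlap is counted once.
    """
    intervals = sorted((yaw - fov / 2.0, yaw + fov / 2.0) for yaw, fov in cameras)
    total, cur_lo, cur_hi = 0.0, *intervals[0]
    for lo, hi in intervals[1:]:
        if lo <= cur_hi:           # overlapping fields of view merge
            cur_hi = max(cur_hi, hi)
        else:                      # gap in coverage
            total += cur_hi - cur_lo
            cur_lo, cur_hi = lo, hi
    return total + (cur_hi - cur_lo)

# Hypothetical layout echoing FIG. 20: two 52-degree side cameras toed
# outward by 64 degrees plus a central 100-degree camera give a seamless
# 180 degrees of forward coverage.
coverage = combined_fov([(-64.0, 52.0), (0.0, 100.0), (64.0, 52.0)])
```

The same helper shows that pulling the side cameras further outward eventually opens gaps instead of widening coverage, which is why overlapping fields of view matter.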

FIG. 21 is a side view representation of an exemplary imaging system 2200. FIG. 22 is a perspective view of imaging system 2200 consistent with the three-camera embodiment of FIG. 10. As shown, imaging system 2200 is mounted on an interior rear window of vehicle 99. In other embodiments, imaging system 2200 may be mounted on the front windshield of vehicle 99 in accordance with the principles of the present disclosure. Imaging system 2200 may include an anti-glare shield 2202 surrounding a relatively small transparent area 2204 that allows light to pass through the window to cameras 2205, 2207, and 2209. The anti-glare shield 2202 may include a dark adhesive, blackout paint, tint, polarization, printed or painted areas for exposure, or any combination thereof. At least one advantage of the anti-glare shield 2202 is that it can reduce glare from incident light caused by the inclination (slope) of the windshield. The anti-glare shield 2202 may surround the relatively small transparent area 2204, which is configured to provide an aperture to cameras 2205, 2207, and 2209, thereby increasing the depth of field of cameras 2205, 2207, and 2209 and, desirably, enabling multiple different objects at a wide range of distances to remain in focus. In some embodiments, the relatively small transparent area 2204 may be tinted or polarized.
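The depth-of-field benefit of a small aperture can be illustrated with the standard hyperfocal-distance formula. The numbers below (focal length, f-numbers, circle of confusion) are illustrative assumptions only, not parameters of the disclosed cameras:

```python
def hyperfocal_m(focal_mm, f_number, coc_mm=0.005):
    """Hyperfocal distance H = f^2 / (N * c) + f, returned in meters.

    Focusing at H keeps everything from roughly H/2 out to infinity
    acceptably sharp, for circle of confusion c and f-number N.
    """
    return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000.0

# Illustrative: a 6 mm lens with a 0.005 mm circle of confusion.
# Stopping down from f/2 to f/4 (a smaller effective aperture, as a
# small transparent opening provides) roughly halves the hyperfocal
# distance, so more of the scene stays in focus at once.
wide_open = hyperfocal_m(6.0, 2.0)
stopped_down = hyperfocal_m(6.0, 4.0)
```

With these assumed numbers the hyperfocal distance drops from about 3.6 m to about 1.8 m, extending the in-focus range toward the vehicle.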

FIG. 23 is a side view of exemplary imaging system 2200 positioned on a front windshield 2300. As shown, the front windshield 2300 has an inclination of 40 degrees relative to a flat horizontal surface such as the ground. Although the example of FIG. 23 shows a windshield 2300 with a 40-degree inclination, windshields with other inclinations (e.g., 35 degrees, 38 degrees, 42 degrees, 45 degrees, etc.) are consistent with the disclosed embodiments. In the exemplary embodiment, imaging system 2200 includes three cameras 2205, 2207, and 2209 (see FIG. 22), but only the first camera 2205 and the third camera 2207 are shown, because camera 2209 would be obscured by camera 2205 in the side view. As shown, cameras 2205 and 2207 are at different heights. Accordingly, the vectors associated with each respective optical axis (shown by the respective dashed lines) may not cross in a vertical plane. However, the vectors associated with each respective optical axis may cross in a horizontal plane. Additionally, in some embodiments, imaging system 2200 may include two cameras, as previously described.
In the exemplary embodiment, imaging module 2350 is shaped with an approximate inclination substantially equal to that of windshield 2300, e.g., about 40 degrees relative to a flat horizontal surface. In other embodiments, imaging module 2350 may be generally rectangular and rely solely on the mounting assembly (not labeled for ease of understanding) to account for the inclination of windshield 2300. Additionally, the mounting assembly (not labeled for ease of understanding) is configured to hold imaging module 2350 fixed relative to the window such that cameras 2205, 2207, and 2209 face outward relative to windshield 2300. Furthermore, the optical axes of cameras 2205, 2207, and 2209 (indicated by dashed arrows) may project outward parallel to the ground. In this way, the imaging module and mounting assembly can account for the inclination (slope) of the windshield. It should be understood that, in the same manner, the imaging module and mounting assembly can account for other degrees and types of inclination, e.g., horizontal, vertical, or combinations of horizontal and vertical.

FIG. 24 is a schematic plan view representation of FIG. 23. In FIG. 24, three cameras are shown: a first camera 2205, a second camera 2209, and a third camera 2207. As shown, the three cameras 2205, 2209, and 2207 face outward through the relatively small, blank transparent area surrounded by anti-glare shield 2202. The anti-glare shield is configured to provide an aperture to cameras 2205, 2207, and 2209, thereby increasing the depth of field of cameras 2205, 2207, and 2209 and enabling multiple different objects at a wide range of distances to remain in focus. In some embodiments, the relatively small transparent area 2204 may be tinted or polarized.

In one embodiment, the first, second, and third cameras may use the same opening in anti-glare shield 2202 (e.g., FIG. 22). In other embodiments, each camera may have its own anti-glare shield and corresponding blank transparent area (not shown). In some examples, the opening in the printed or painted area used by the cameras may be equal to or smaller than the area that would be required by a single camera whose field of view has characteristics similar to the combined FOV of the three (or more) cameras in the camera module.

In one embodiment, the third camera is optionally located between the first camera and the second camera. It should therefore be appreciated that, because the side cameras (first and second cameras 2205, 2209) can combine to form a combined field of view larger than the field of view of the third camera 2207 alone, a thinner lens can be used in the third camera by exploiting the combined FOV, without compromising the performance of the camera module. This may be particularly advantageous because the third camera 2207 can then be attached closer to the windshield (or window) of the vehicle than the first and second cameras 2205, 2209. It will further be appreciated that fitting the central camera 2207 more closely against the windshield makes it possible to obtain or support greater field-of-view coverage with a smaller opening in the printing or painting on the window.

FIG. 25 is a schematic plan view representation of another embodiment consistent with the present disclosure. In the exemplary embodiment, imaging module 2512 includes cameras arranged in the shape of a semicircle whose radius coincides with a relatively small, blank transparent area 2511. That is, in the exemplary embodiment, the imaging module is configured to arrange a plurality of cameras 2501, 2502, 2503, 2504, 2505, 2506, and 2507 along a semicircular arc. As shown, the cameras are oriented toward the radius of the semicircle. Consistent with other disclosed embodiments, a mounting assembly (not shown) may be configured to attach imaging module 2512 to an interior window (or any other component) of the vehicle such that the cameras face outward relative to the vehicle. In other embodiments in which imaging module 2512 has an arcuate shape, the corresponding radius may not be positioned to coincide with the relatively small, blank transparent area 2511; for example, it may lie to either side. In the exemplary embodiment, there are seven symmetrically oriented cameras 2501, 2502, 2503, 2504, 2505, 2506, and 2507, although in other embodiments there may be more or fewer cameras. In some embodiments, the cameras may not be symmetrically spaced along the semicircular arc.

In the exemplary embodiment, each respective camera 2501, 2502, 2503, 2504, 2505, 2506, and 2507 has a respective field of view (represented by a corresponding triangular area) and a respective optical axis (indicated by a corresponding dashed line) projecting outward from the single relatively small transparent opening 2511. In the exemplary embodiment, the combined field of view Fc is 170 degrees, although in other embodiments it may be more or less, e.g., in the range of about 100 to 180 degrees. As shown, each camera 2501, 2502, 2503, 2504, 2505, 2506, and 2507 has a relatively narrow field of view when compared to the combined field of view Fc.
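A rig like the one in FIG. 25 can be sketched numerically: cameras evenly spaced along a semicircular arc, each aimed at the arc's center. The radius, lens angle, and 144-degree fan below are hypothetical choices that merely reproduce a 170-degree combined field of view; none of them are specified by the disclosure:

```python
import math

def semicircle_rig(n_cams, radius, span_deg):
    """Place n cameras evenly on a semicircular arc, axes aimed at the
    arc's center (the small transparent opening in the glare shield).

    Returns a list of (position, yaw_deg) pairs; yaw 0 is straight
    ahead, and the optical-axis yaws fan evenly across span_deg.
    """
    step = span_deg / (n_cams - 1)
    rig = []
    for i in range(n_cams):
        yaw = -span_deg / 2.0 + i * step
        # The camera sits on the arc, behind the opening, looking inward,
        # so its position is the opening minus radius along its axis.
        x = -radius * math.sin(math.radians(yaw))
        y = -radius * math.cos(math.radians(yaw))
        rig.append(((x, y), yaw))
    return rig

# Illustrative: seven cameras on a 5 mm-radius arc (a 10 mm circle).
# With hypothetical 26-degree lenses fanned over 144 degrees, adjacent
# fields overlap (24-degree yaw steps < 26-degree FOV) and the merged
# coverage is 144 + 26 = 170 degrees, as in the FIG. 25 example.
rig = semicircle_rig(7, 0.005, 144.0)
yaws = [yaw for _, yaw in rig]
```

Because every axis passes through the arc's center, all pairwise axis crossings fall near the opening, a small distance from the cameras, consistent with the intersection-point spacing described below.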

As shown, each respective field of view at least partially overlaps the radius of the semicircle, and the radius of the semicircle is centered on the single relatively small transparent opening 2511. Additionally, each respective optical axis (indicated by a corresponding dashed line) crosses every other respective optical axis at at least one respective intersection point of a respective intersection plane (not shown for ease of understanding), consistent with the disclosure above (see FIG. 10). Thus, each crossing of two optical axes corresponds to a respective intersection point of a corresponding intersection plane. In the exemplary embodiment, each respective optical axis crosses every other respective optical axis in at least one horizontal plane. In other embodiments, each respective optical axis crosses every other respective optical axis in at least a vertical plane. In still other embodiments, each respective optical axis crosses every other respective optical axis in both a horizontal plane and a vertical plane.

In the exemplary embodiment, the relatively small transparent area 2511 is configured to provide an aperture to each camera 2501, 2502, 2503, 2504, 2505, 2506, and 2507, thereby increasing the depth of field of each camera 2501, 2502, 2503, 2504, 2505, 2506, and 2507 and enabling multiple different objects at a wide range of distances to remain in focus. Furthermore, the relatively small transparent area 2511 is smaller than the comparable transparent area that would be required by a wide-angle camera having a wide-angle field of view substantially equal to the combined field of view Fc. In some embodiments, the combined field of view may be on the order of at least 180 degrees in front of vehicle 99. As shown, the blank transparent area 2511 may be bounded by a component 2510. Component 2510 may be a solid feature such as a vehicle part, e.g., a pillar, bumper, door panel, headlight, side window, front window, or the like.

In the exemplary embodiment, each camera 2501, 2502, 2503, 2504, 2505, 2506, and 2507 is spaced an equal distance from at least one immediately adjacent camera 2501, 2502, 2503, 2504, 2505, 2506, or 2507 of the plurality of cameras. In other embodiments, cameras 2501, 2502, 2503, 2504, 2505, 2506, and 2507 are not spaced equal distances apart. In the exemplary embodiment, each respective intersection point of each respective intersection plane is spaced from the nearest camera by at least the equal distance between immediately adjacent cameras. As shown, each respective intersection point of each respective intersection plane is spaced from the nearest camera by at most four times the equal distance between immediately adjacent cameras. In other embodiments, each respective intersection point of each respective intersection plane is spaced from the nearest camera by at most six times the equal distance between immediately adjacent cameras.

In the exemplary embodiment, the camera whose optical axis meets the relatively small, blank transparent area 2511 at the most oblique angle may have a polarizing filter 2515. The polarizing filter 2515 may help filter out or avoid reflections of incident light refracted by the relatively small transparent opening 2511. Moreover, because multiple cameras are used, polarizing filters 2515 may be attached only to those cameras that are most affected. Additionally, because the exemplary embodiment has multiple cameras 2501, 2502, 2503, 2504, 2505, 2506, and 2507, the exposure level of each camera can be optimized for that particular camera. Furthermore, glare produced by sunlight striking the lens of one particular camera may not affect the images of the other cameras. In this way, the multiple cameras 2501, 2502, 2503, 2504, 2505, 2506, and 2507 may be configured to provide redundancy in image quality and avoid the possibility of poor exposures.

In at least one embodiment, seven VGA-resolution camera cubes of about 2 mm are spaced apart from one another along a semicircular arc, as shown in FIG. 25. According to this embodiment, the semicircle corresponds to a circle about 10 mm in diameter. At least one advantage of this embodiment is that it yields a very compact, high-resolution combined field of view.

In a similar but alternative embodiment, multiple cameras may be mounted in a hemisphere, with a first ring of 12 cameras located near a relatively small, transparent hemispherical opening. In this embodiment, each of the 12 cameras of the first ring may have a polarizing filter. In addition, a second ring of 8 cameras, a third ring of 4 cameras, and a central camera may be included.

Those of ordinary skill in the art will understand that image processing techniques may be used to combine the fields of view of two or more cameras to provide a combined field of view, consistent with the disclosure herein. For example, at least one processing device (e.g., processing unit 110) may execute program instructions to provide a combined field of view and/or analyze one or more images captured by two or more cameras. If the respective fields of view of two or more cameras partially overlap, any resulting combined field of view may include the overlapping region of the cameras' fields of view as well as the non-overlapping regions of each camera. Cropping may be used to reduce or prevent redundancy in imaging techniques consistent with the present disclosure.
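As a toy illustration of the cropping just mentioned, the sketch below (Python with NumPy; the overlap width is assumed known and the frames are assumed already rectified to a common image plane, both of which are simplifying assumptions) merges two overlapping frames while counting the shared columns only once:

```python
import numpy as np

def stitch_side_by_side(left, right, overlap_px):
    """Naively merge two horizontally adjacent camera frames whose
    fields of view overlap by a known number of pixel columns.

    The overlapping columns are cropped from the right frame so the
    shared region appears only once in the mosaic.
    """
    assert left.shape[0] == right.shape[0], "frames must share a height"
    return np.hstack([left, right[:, overlap_px:]])

# Two hypothetical 4x6 grayscale frames overlapping by 2 columns.
a = np.arange(24).reshape(4, 6)
b = np.arange(24).reshape(4, 6) + 100
mosaic = stitch_side_by_side(a, b, overlap_px=2)
```

A production stitcher would estimate the overlap from calibration or feature matching and blend across the seam; cropping a known overlap is the minimal version of the redundancy-reduction idea.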

Those of ordinary skill in the art will also understand that the disclosed embodiments may be positioned at any location in or on a host vehicle, and are not necessarily limited to autonomous vehicles. Moreover, in some embodiments, multiple imaging systems may be installed on vehicle 99. For example, a first imaging system may be mounted on the right rear side window, a second imaging system may be mounted on the left rear side window, a third imaging system may be mounted on the rear window, and a fourth imaging system may be mounted on the front windshield. According to this embodiment, the multiple imaging systems may combine to form a fully panoramic combined field of view around vehicle 99.

The foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limited to the precise forms or embodiments disclosed. Modifications and adaptations will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed embodiments.

Furthermore, although illustrative embodiments have been described herein, those skilled in the art will, on the basis of this disclosure, appreciate the scope of any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across the various embodiments), adaptations, and/or alterations. The limitations in the claims are to be interpreted broadly based on the language employed in the claims and are not limited to the examples described in this specification or during the prosecution of the application; such examples are to be construed as non-exclusive. Further, the steps of the disclosed methods may be modified in any manner, including by reordering steps and/or inserting or deleting steps. It is intended, therefore, that the specification and examples be considered as illustrative only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.

Claims (25)

1. An imaging system for a vehicle, comprising:
An imaging module;
a first camera coupled to the imaging module, the first camera including a first lens having a first field of view and a first optical axis;
a second camera coupled to the imaging module, the second camera including a second lens having a second field of view and a second optical axis, the second field of view being different from the first field of view, the angle of the first lens being greater than the angle of the second lens; and
a mounting assembly configured to attach the imaging module to an interior rear window of the vehicle such that the first and second cameras are positioned at the same elevation and the first and second cameras face outwardly relative to the vehicle,
wherein the first optical axis intersects the second optical axis at at least one intersection point of an intersection plane,
wherein the first camera is focused at a first horizontal distance beyond the intersection of the intersection planes and the second camera is focused at a second horizontal distance beyond the intersection of the intersection planes, and
wherein the first and second fields of view at least partially overlap and form a combined field of view, and the first field of view is perpendicular to a central longitudinal axis of the vehicle;
wherein the intersection of the intersecting planes is spaced from the first and second cameras by a separation distance that is one to four times the shortest distance between the lens of the first camera and the lens of the second camera;
wherein the internal rear window includes an antiglare shield surrounding a transparent area that allows light to pass through the window to the imaging module, the transparent area being smaller than the transparent area required for a wide-angle camera having a wide-angle field of view equal to a combined field of view of the first and second cameras.
2. The imaging system of claim 1, wherein the first optical axis intersects the second optical axis in a horizontal plane.
3. The imaging system of claim 1, wherein the first optical axis intersects the second optical axis in a vertical plane.
4. The imaging system of claim 1, wherein the first optical axis intersects the second optical axis in both a horizontal plane and a vertical plane.
5. The imaging system of claim 1, wherein the first horizontal distance and the second horizontal distance are equal distances.
6. The imaging system of claim 1, wherein the first horizontal distance is two or three times the second horizontal distance.
7. The imaging system of claim 1, wherein an intersection of the intersecting planes is located between the inner rear window and the first and second cameras.
8. The imaging system of claim 1, wherein an intersection of the intersecting planes is located a predetermined distance from an outer surface of the inner rear window.
9. The imaging system of claim 1, wherein the combined field of view comprises at least 180 degrees.
10. The imaging system of claim 1, further comprising:
a wiper assembly including at least one wiper blade configured to clear a barrier from respective fields of view of the first and second cameras.
11. The imaging system of claim 1, wherein the imaging module is configured to arrange the first and second cameras side-by-side.
12. The imaging system of claim 1, wherein the transparent region is configured to provide apertures to the first and second cameras to increase a depth of field of the first camera and a depth of field of the second camera and enable a plurality of different objects at a wide range of distances to remain in focus.
13. The imaging system of claim 1, wherein an intersection of the intersecting planes is at a center of the transparent region.
14. An imaging system for a vehicle, comprising:
an imaging module;
a first camera coupled to the imaging module, the first camera including a first lens having a first field of view and a first optical axis;
a second camera coupled to the imaging module, the second camera including a second lens having a second field of view and a second optical axis, the second field of view being different from the first field of view, the angle of the first lens being greater than the angle of the second lens;
a third camera coupled to the imaging module, the third camera including a third lens having a third field of view and a third optical axis, the third field of view being different from the first field of view;
a mounting assembly configured to attach the imaging module to an interior rear window of the vehicle such that the first, second, and third cameras are positioned at a same height and the first, second, and third cameras face outwardly relative to the vehicle, wherein:
the first optical axis intersects the second optical axis at at least one first intersection point of a first intersection plane;
the first optical axis intersects the third optical axis at at least one second intersection point of a second intersection plane;
the second optical axis intersects the third optical axis at at least one third intersection point of a third intersection plane; and
the first and second fields of view at least partially overlap with a third field of view, and the first, second, and third fields of view form a combined field of view, and the third field of view is perpendicular to a central longitudinal axis of the vehicle;
the first intersection of the first intersecting plane, the second intersection of the second intersecting plane, and the third intersection of the intersecting planes coincide with one another and form a coincident intersection of coincident intersecting planes;
the coincidence intersection point of the coincidence intersection plane is spaced apart from the first, second, and third cameras by a separation distance that is one to four times the shortest distance between the first, second, and third cameras;
the internal rear window includes an antiglare shield surrounding a transparent area that allows light to pass through the window to the imaging module, and the transparent area is less than a transparent area required for a wide-angle camera having a wide-angle field of view equal to a combined field of view of the first, second, and third cameras.
15. The imaging system of claim 14, wherein the first optical axis intersects the second optical axis in a horizontal plane, the first optical axis intersects the third optical axis in a horizontal plane, and the second optical axis intersects the third optical axis in a horizontal plane.
16. The imaging system of claim 14, wherein the first optical axis intersects the second optical axis in a vertical plane, the first optical axis intersects the third optical axis in a vertical plane, and the second optical axis intersects the third optical axis in a vertical plane.
17. The imaging system of claim 14, wherein the first optical axis intersects the second optical axis on both a horizontal axis and a vertical axis, the first optical axis intersects the third optical axis on both a horizontal axis and a vertical axis, and the second optical axis intersects the third optical axis on both a horizontal axis and a vertical axis.
18. The imaging system of claim 14, wherein the third camera is positioned centrally between the first and second cameras and is equidistantly spaced between the first and second cameras.
19. The imaging system of claim 14, wherein the mounting assembly is configured to attach the imaging module to a rear interior window.
20. The imaging system of claim 14, further comprising:
a wiper assembly including at least one wiper blade configured to clear a barrier from respective fields of view of the first, second, and third cameras.
21. The imaging system of claim 14, wherein the imaging module is configured to arrange the first, second, and third cameras side-by-side.
22. The imaging system of claim 19, wherein the rear interior window comprises an antiglare shield comprising a compound film and an antireflective material.
23. The imaging system of claim 22, wherein the rear interior window includes a transparent region surrounded by a non-transparent region.
24. The imaging system of claim 14, wherein the transparent region is configured to provide apertures for the first, second, and third cameras, thereby increasing a depth of field of the first camera, a depth of field of the second camera, and a depth of field of the third camera, and enabling a plurality of different objects at a wide range of distances to remain in focus.
25. The imaging system of claim 24, wherein the coincidence intersection point of the coincidence intersection plane coincides with a central region of the transparent region.
CN201880030715.8A 2017-05-10 2018-05-10 Cross-field of view for autonomous vehicle systems Expired - Fee Related CN110612431B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201762504504P 2017-05-10 2017-05-10
US62/504,504 2017-05-10
PCT/US2018/032087 WO2018209103A2 (en) 2017-05-10 2018-05-10 Cross field of view for autonomous vehicle systems

Publications (2)

Publication Number Publication Date
CN110612431A CN110612431A (en) 2019-12-24
CN110612431B true CN110612431B (en) 2023-08-18

Family

ID=62685089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880030715.8A Expired - Fee Related CN110612431B (en) 2017-05-10 2018-05-10 Cross-field of view for autonomous vehicle systems

Country Status (6)

Country Link
US (2) US11463610B2 (en)
EP (1) EP3635332A2 (en)
JP (1) JP7334881B2 (en)
KR (1) KR102547600B1 (en)
CN (1) CN110612431B (en)
WO (1) WO2018209103A2 (en)

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018170074A1 (en) 2017-03-14 2018-09-20 Starsky Robotics, Inc. Vehicle sensor system and method of use
US11393114B1 (en) * 2017-11-08 2022-07-19 AI Incorporated Method and system for collaborative construction of a map
US10957023B2 (en) * 2018-10-05 2021-03-23 Magna Electronics Inc. Vehicular vision system with reduced windshield blackout opening
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
CN111830669B * 2019-04-17 2025-06-13 Zhejiang Sunny Optical Co., Ltd. Electronic imaging device
DE102019207982A1 (en) * 2019-05-31 2020-12-03 Deere & Company Sensor arrangement for an agricultural vehicle
US11470265B2 (en) 2019-12-16 2022-10-11 Plusai, Inc. System and method for sensor system against glare and control thereof
US11724669B2 (en) 2019-12-16 2023-08-15 Plusai, Inc. System and method for a sensor protection system
US11077825B2 (en) 2019-12-16 2021-08-03 Plusai Limited System and method for anti-tampering mechanism
US11738694B2 (en) 2019-12-16 2023-08-29 Plusai, Inc. System and method for anti-tampering sensor assembly
US11650415B2 (en) 2019-12-16 2023-05-16 Plusai, Inc. System and method for a sensor protection mechanism
US11754689B2 (en) 2019-12-16 2023-09-12 Plusai, Inc. System and method for detecting sensor adjustment need
US11308641B1 (en) * 2019-12-20 2022-04-19 Ambarella International Lp Oncoming car detection using lateral emirror cameras
US12366855B2 (en) * 2020-02-26 2025-07-22 Polaris Industries Inc. Environment monitoring system and method for a towed recreational vehicle
US11295521B2 (en) * 2020-03-25 2022-04-05 Woven Planet North America, Inc. Ground map generation
CN115916610A (en) 2020-04-21 2023-04-04 北极星工业有限公司 System and method for operating an all-terrain vehicle
CZ309023B6 * 2020-05-20 2021-12-01 Czech Technical University in Prague Method of using effects arising in images due to the relative movement of the image capture device relative to the scene, or part of the scene, and a device therefor
JP7379297B2 * 2020-08-27 2023-11-14 Honda Motor Co., Ltd. Mobile object
KR20230065928A 2020-09-14 2023-05-12 LG Electronics Inc. Vehicle image processing device and method for displaying visual information on a display included in the vehicle
WO2022067380A1 (en) * 2020-09-29 2022-04-07 Jude Elkanah Permall Traffic stop camera assembly
JP7321987B2 * 2020-10-01 2023-08-07 Daihatsu Motor Co., Ltd. Vehicle compound eye camera
CN112257535B * 2020-10-15 2022-04-08 Tianmu Aishi (Beijing) Technology Co., Ltd. Three-dimensional matching equipment and method for avoiding object
WO2022078437A1 * 2020-10-15 2022-04-21 Zuo Zhongbin Three-dimensional processing apparatus and method between moving objects
WO2022085487A1 * 2020-10-23 2022-04-28 Sony Group Corporation Camera module, information processing system, information processing method, and information processing device
FR3121998B1 (en) * 2021-04-19 2023-03-03 Idemia Identity & Security France Optical capture device
KR102506812B1 * 2021-08-27 2023-03-07 Kim Bae-hun Autonomous vehicle
JP7582214B2 * 2022-01-11 2024-11-13 Toyota Motor Corporation Road marking detection device, notification system including same, and road marking detection method
US11772667B1 (en) 2022-06-08 2023-10-03 Plusai, Inc. Operating a vehicle in response to detecting a faulty sensor using calibration parameters of the sensor
EP4316912A1 (en) * 2022-08-03 2024-02-07 Aptiv Technologies Limited Vehicle camera, camera system, video processing method, software, and vehicle incorporating the same
JP2024055683A * 2022-10-07 2024-04-18 Subaru Corporation Vehicle camera unit
CN120130123A * 2022-10-28 2025-06-10 Tesla, Inc. Windshield heater grille
CN116248810A * 2022-12-09 2023-06-09 Jiangsu North Huguang Optoelectronics Co., Ltd. A device and method for stitching large-field-of-view, low-illuminance images without blind spots in the front view of a vehicle
CN118200749A * 2022-12-14 2024-06-14 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Camera assembly, robot and mounting method
JP7697452B2 * 2022-12-28 2025-06-24 Toyota Motor Corporation Mobile system
KR102841691B1 * 2024-12-12 2025-08-04 Equalizer Co., Ltd. Method and system for generating widescreen images based on optical axis intersection of multiple cameras

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5937212A (en) * 1996-11-15 1999-08-10 Canon Kabushiki Kaisha Image pickup apparatus
CN101037099A * 2006-03-14 2007-09-19 Ford Global Technologies, LLC Device and method for outwardly looking IR camera mounted inside vehicles

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5440337A (en) * 1993-11-12 1995-08-08 Puritan-Bennett Corporation Multi-camera closed circuit television system for aircraft
US8994822B2 (en) * 2002-08-28 2015-03-31 Visual Intelligence Lp Infrastructure mapping system and method
US7893957B2 (en) * 2002-08-28 2011-02-22 Visual Intelligence, LP Retinal array compound camera system
JP2004201489A (en) 2002-12-02 2004-07-15 Mitsuba Corp Actuator and wiper device
US7365774B2 (en) * 2002-12-13 2008-04-29 Pierre Louis Device with camera modules and flying apparatus provided with such a device
EP1790541A2 (en) 2005-11-23 2007-05-30 MobilEye Technologies, Ltd. Systems and methods for detecting obstructions in a camera field of view
US9182228B2 (en) * 2006-02-13 2015-11-10 Sony Corporation Multi-lens array system and method
US8120653B2 (en) 2006-12-29 2012-02-21 Mirror Lite Video monitoring system for school buses
JP4970296B2 (en) * 2008-01-21 2012-07-04 株式会社パスコ Orthophoto image generation method and photographing apparatus
JP4513906B2 (en) * 2008-06-27 2010-07-28 ソニー株式会社 Image processing apparatus, image processing method, program, and recording medium
DE102011103302A1 (en) * 2011-06-03 2012-12-06 Conti Temic Microelectronic Gmbh Camera system for a vehicle
WO2014031284A1 (en) 2012-08-21 2014-02-27 Visual Intelligence, LP Infrastructure mapping system and method
EP3143607A1 (en) * 2014-05-14 2017-03-22 Mobileye Vision Technologies Ltd. Systems and methods for curb detection and pedestrian hazard assessment
US9195895B1 (en) * 2014-05-14 2015-11-24 Mobileye Vision Technologies Ltd. Systems and methods for detecting traffic signs
DE102014209782A1 (en) 2014-05-22 2015-11-26 Robert Bosch Gmbh Camera system for detecting an environment of a vehicle
US10547825B2 (en) * 2014-09-22 2020-01-28 Samsung Electronics Company, Ltd. Transmission of three-dimensional video
JP2016143308A (en) 2015-02-04 2016-08-08 パイオニア株式会社 Notification device, control method, program, and storage medium
JP6176280B2 (en) * 2015-03-30 2017-08-09 トヨタ自動車株式会社 Arrangement structure of surrounding information detection sensor and autonomous driving vehicle
US10328934B2 (en) * 2017-03-20 2019-06-25 GM Global Technology Operations LLC Temporal data associations for operating autonomous vehicles

Also Published As

Publication number Publication date
WO2018209103A3 (en) 2018-12-20
JP2020522906A (en) 2020-07-30
EP3635332A2 (en) 2020-04-15
KR20200006556A (en) 2020-01-20
US20200112657A1 (en) 2020-04-09
CN110612431A (en) 2019-12-24
JP7334881B2 (en) 2023-08-29
WO2018209103A2 (en) 2018-11-15
KR102547600B1 (en) 2023-06-26
US12114056B2 (en) 2024-10-08
US20220360693A1 (en) 2022-11-10
US11463610B2 (en) 2022-10-04

Similar Documents

Publication Publication Date Title
CN110612431B (en) Cross-field of view for autonomous vehicle systems
JP7725804B2 (en) Vehicle navigation based on detected barriers
US12242287B2 (en) Systems and methods for navigating lane merges and lane splits
US10832063B2 (en) Systems and methods for detecting an object
JP7157054B2 (en) Vehicle navigation based on aligned images and LIDAR information
KR20230017365A (en) controlling host vehicle based on detected spacing between stationary vehicles
CN116783886A (en) Motion-based online calibration of multiple cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20230818