CN106503653B - Region labeling method and device and electronic equipment - Google Patents
- Publication number
- CN106503653B (application CN201610921206.7A)
- Authority
- CN
- China
- Prior art keywords
- obstacle
- image information
- road surface
- area
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Mathematical Physics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
Description
Technical Field
The present application relates to the field of assisted driving, and more particularly to a region labeling method, apparatus, electronic device, computer program product, and computer-readable storage medium.
Background
In recent years, with the rapid development of the vehicle industry, traffic accidents have become a global problem: the number of people killed or injured in traffic accidents worldwide is estimated to exceed 500,000 each year. Assisted driving technology, which integrates automatic control, artificial intelligence, pattern recognition, and related techniques, has emerged in response. It can provide a driver with necessary information and/or warnings to avoid dangerous situations such as collisions or lane departures, and in some cases can even be used to control the vehicle automatically.
Drivable-area detection has long been one of the key components of assisted driving. The most commonly used detection approaches today are based on machine learning models. To ensure a model's accuracy, a large number of images of driving environments must be used as training samples for offline training. Because driving environments typically contain various obstacles such as vehicles and pedestrians, these obstacle regions must be labeled in the training samples before offline training, so that the drivable area available to vehicles remains. At present, obstacle-region labeling in training samples is done mainly by hand: a user must locate each obstacle in a large volume of images and annotate its size, position, and so on. Since a training sample library generally needs to reach a scale of hundreds of thousands of images, this manual approach is extremely time-consuming, very costly in labor, and does not scale.
Existing region labeling techniques are therefore inefficient.
Summary of the Invention
The present application is made to address the technical problems above. Embodiments of the present application provide a region labeling method, apparatus, electronic device, computer program product, and computer-readable storage medium that can automatically label obstacle regions in a driving environment.
According to one aspect of the present application, a region labeling method is provided, comprising: in the process of generating training samples for training a machine learning model, acquiring image information of a driving environment captured by an imaging device; acquiring depth information of the driving environment that is time-synchronized with the image information; and labeling an obstacle region of the driving environment in the image information according to the depth information.
According to another aspect of the present application, a region labeling apparatus is provided, comprising: an image acquisition unit configured to acquire, in the process of generating training samples for training a machine learning model, image information of a driving environment captured by an imaging device; a depth acquisition unit configured to acquire depth information of the driving environment that is time-synchronized with the image information; and an obstacle labeling unit configured to label an obstacle region of the driving environment in the image information according to the depth information.
According to another aspect of the present application, an electronic device is provided, comprising: a processor; a memory; and computer program instructions stored in the memory which, when executed by the processor, cause the processor to perform the region labeling method described above.
According to another aspect of the present application, a computer program product is provided, comprising computer program instructions which, when executed by a processor, cause the processor to perform the region labeling method described above.
According to another aspect of the present application, a computer-readable storage medium is provided, on which computer program instructions are stored which, when executed by a processor, cause the processor to perform the region labeling method described above.
Compared with the prior art, the region labeling method, apparatus, electronic device, computer program product, and computer-readable storage medium according to embodiments of the present application can, in the process of generating training samples for training a machine learning model, acquire image information of a driving environment captured by an imaging device, acquire depth information of the driving environment that is time-synchronized with that image information, and label obstacle regions of the driving environment in the image information according to the depth information. Compared with manual labeling of obstacle regions as in the prior art, obstacle regions in a driving environment can thus be labeled automatically, improving labeling efficiency.
Brief Description of the Drawings
The above and other objects, features, and advantages of the present application will become more apparent from the following detailed description of its embodiments in conjunction with the accompanying drawings. The drawings provide further understanding of the embodiments, constitute a part of the specification, and serve to explain the application together with the embodiments; they do not limit the application. In the drawings, the same reference numerals generally denote the same components or steps.
FIG. 1 is a schematic diagram of image information of a driving environment captured by an imaging device according to an embodiment of the present application.
FIG. 2 is a flowchart of a region labeling method according to a first embodiment of the present application.
FIG. 3 is a flowchart of the depth-information acquisition step according to an embodiment of the present application.
FIG. 4 is a flowchart of the obstacle labeling step according to an embodiment of the present application.
FIG. 5 is a flowchart of a region labeling method according to a second embodiment of the present application.
FIG. 6 is a flowchart of the drivable-area labeling step according to an embodiment of the present application.
FIG. 7A is a schematic diagram of the image information of FIG. 1 combined with depth information and user input according to an embodiment of the present application; FIG. 7B is a schematic diagram of the image information of FIG. 1 with obstacle regions and a drivable area labeled according to an embodiment of the present application.
FIG. 8 is a block diagram of a region labeling apparatus according to an embodiment of the present application.
FIG. 9 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments of the present application will be described in detail with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application, and it should be understood that the application is not limited by the example embodiments described herein.
Overview of the Application
As described above, in the prior art the labeling of obstacle regions in training samples relies mainly on manual work by a user, which is complicated and inefficient.
To address this, the basic idea of the present application is a new region labeling method, apparatus, electronic device, computer program product, and computer-readable storage medium that, during labeling, combine depth information from a depth sensor to automatically mark obstacle regions in the image information captured by an imaging device, without manual operation, thereby reducing labeling cost and increasing labeling speed.
Embodiments of the present application can be applied in various scenarios. For example, they can be used to label obstacle regions in the driving environment of a vehicle, which may be of various types: an automobile, an aircraft, a spacecraft, a watercraft, and so on. For ease of explanation, the description below uses an automobile as the example vehicle.
For example, in order for a vehicle to recognize, during actual driving, the various obstacles on the road surface that forms its driving environment, and thereby support assisted driving, the machine learning model in the vehicle must first be trained offline on a large number of images of driving environments as training samples. To this end, one or more imaging devices can be mounted on a test vehicle in advance to collect a large amount of image information about different driving environments. Of course, the application is not limited to this; the image information may also come from surveillance cameras at fixed positions, directly from the Internet, and so on.
FIG. 1 is a schematic diagram of image information of a driving environment captured by an imaging device according to an embodiment of the present application.
As shown in FIG. 1, the image information acquired by a test vehicle equipped with an imaging device shows the vehicle driving on a road surface, a typical driving environment. On this road surface there are three obstacles (obstacle 1, obstacle 2, and obstacle 3, which are other vehicles at different distances), four lane lines (lane line 1 through lane line 4, from left to right), and one boundary line (boundary line 1, between the road and the grass).
Existing obstacle-region labeling methods generally require a user to find each obstacle in the image by eye and annotate its size and position by circling it with a mouse. In ordinary circumstances this is simple and effective. However, the sample library used for offline training of a machine learning model typically contains a very large number of images; if every image must be inspected by eye and annotated by hand, the work is necessarily time-consuming and labor-intensive, and manual operation inevitably produces missed or incorrect labels. Existing obstacle labels may therefore not be accurate enough, which in turn corrupts the subsequent machine learning results and may cause the vehicle to misjudge actual road conditions during online use, creating a traffic safety hazard.
Therefore, in embodiments of the present application, in the process of generating training samples for training a machine learning model, image information of a driving environment captured by an imaging device is acquired, depth information of the driving environment that is time-synchronized with the image information is acquired, and obstacle regions of the driving environment are labeled in the image information according to the depth information. Embodiments based on this idea can thus label obstacle regions in a driving environment automatically, improving labeling efficiency.
Although the embodiments above are described using a vehicle as an example, the application is not limited to this. Its embodiments can also be applied to labeling obstacle regions in the operating environments of various online electronic devices, such as mobile robots and fixed surveillance cameras.
Various embodiments of the present application are described below with reference to the accompanying drawings, in the context of the application scenario of FIG. 1.
Exemplary Method
FIG. 2 is a flowchart of a region labeling method according to a first embodiment of the present application.
As shown in FIG. 2, the region labeling method according to the first embodiment of the present application may include:
Step S110: in the process of generating training samples for training a machine learning model, acquire image information of a driving environment captured by an imaging device.
To train a machine learning model offline, the driving-environment images that serve as training samples must first be labeled to find the obstacle regions in them. A large amount of such image information can be collected by one or more imaging devices. For example, when an imaging device is mounted on a test vehicle (also called the current vehicle), it can capture image information of the road surface in the vehicle's direction of travel, as shown in FIG. 1.
The imaging device may be an image sensor for capturing image information, such as a camera or a camera array. The image information collected by the image sensor may be a sequence of continuous image frames (i.e., a video stream) or a sequence of discrete image frames (i.e., image data sampled at predetermined sampling times). The camera may be, for example, monocular, binocular, or multi-view, and may capture grayscale images or color images carrying color information. Any other type of camera known in the art or developed in the future may also be applied to the present application; the application places no particular restriction on how images are captured, as long as the grayscale or color information of the input image can be obtained. To reduce the amount of computation in subsequent operations, in one embodiment a color image may be converted to grayscale before analysis and processing.
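The grayscale conversion mentioned above can be sketched for a single pixel as follows. The patent does not specify a formula, so the common ITU-R BT.601 luminance weighting used here is an illustrative assumption:

```python
def to_grayscale(pixel):
    """Convert an (R, G, B) pixel to one luminance value.

    Uses ITU-R BT.601 weights (an assumption; the patent only states
    that color images may be grayscaled before processing).
    """
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

# A pure white pixel maps to full luminance.
print(round(to_grayscale((255, 255, 255))))  # → 255
```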
Step S120: acquire depth information of the driving environment that is time-synchronized with the image information.
Before, after, or simultaneously with step S110, depth information of the road surface, acquired at the same time as the image information, can additionally be obtained.
The depth sensor may be any suitable sensor, such as a binocular camera that measures depth from a binocular disparity map, or an infrared depth sensor (or laser depth sensor) that measures depth from infrared illumination. The depth sensor can generate depth information such as a depth map or a laser point cloud, used to measure the positions of obstacles relative to the current vehicle. It can collect any suitable depth information related to an obstacle's distance from the current vehicle, for example how far ahead of the vehicle the obstacle is. Furthermore, beyond distance information, it can collect directional information, such as whether the obstacle is to the vehicle's left or right, and it can collect distance information at different points in time to determine whether the obstacle is moving toward or away from the current vehicle. The description below uses a laser depth sensor as the example.
FIG. 3 is a flowchart of the depth-information acquisition step according to an embodiment of the present application.
As shown in FIG. 3, step S120 may include:
Sub-step S121: determine the acquisition time at which the imaging device captured the image information.
For example, the image information may include various attribute information, such as its acquisition time, from which the acquisition time of the image information can be determined.
Sub-step S122: acquire the depth information of the road surface in the direction of travel collected by the current vehicle's depth sensor at that acquisition time.
Similarly, the depth information may also include attribute information such as its acquisition time. Given the acquisition time of the image information, the depth information collected at the same moment can be identified.
It should be noted that the present application is not limited to this. For example, during the acquisition stage of the imaging device and the depth sensor, the image information and depth information collected at the same moment can be stored together as a pair of related records for later retrieval.
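The time synchronization of sub-steps S121 and S122 can be sketched as a nearest-timestamp lookup. The function name, the (timestamp, data) layout, and the 50 ms tolerance below are illustrative assumptions; the patent only requires that image and depth records be matched by acquisition time:

```python
def match_depth_frame(image_time, depth_frames, tolerance=0.05):
    """Return the depth frame whose timestamp is closest to image_time.

    depth_frames is a list of (timestamp, data) pairs; the 50 ms
    tolerance is an illustrative assumption, not a value from the patent.
    """
    best = min(depth_frames, key=lambda f: abs(f[0] - image_time))
    if abs(best[0] - image_time) > tolerance:
        return None  # no sufficiently synchronized depth frame
    return best

frames = [(0.00, "d0"), (0.10, "d1"), (0.20, "d2")]
print(match_depth_frame(0.09, frames))  # → (0.1, 'd1')
```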
Referring back to FIG. 2, in step S130 an obstacle region of the driving environment is labeled in the image information according to the depth information.
After the corresponding image information and depth information have been obtained, various methods can combine the two to detect obstacles and their regions in the driving environment.
FIG. 4 is a flowchart of the obstacle labeling step according to an embodiment of the present application.
As shown in FIG. 4, step S130 may include:
Sub-step S131: determine from the depth information whether an obstacle is present on the road surface.
For example, the obstacle may be at least one of: a pedestrian, an animal, dropped cargo, a warning sign, a traffic barrier, or another vehicle.
A laser depth sensor emits an extremely short light pulse and measures the time from emission until the pulse is reflected back by an obstacle; the distance to the object is computed from this time interval. From the positions and return times of the laser points detected by the sensor, it can therefore be determined whether an obstacle is present on the road surface and what its positional relationship to the current vehicle is.
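The time-of-flight principle just described reduces to d = c·t/2, since the measured interval covers the pulse's round trip. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_seconds):
    """Distance implied by a laser pulse's round-trip time: d = c * t / 2."""
    return C * round_trip_seconds / 2.0

# A 1-microsecond round trip corresponds to roughly 150 m.
print(round(tof_distance(1e-6), 1))  # → 149.9
```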
Sub-step S132: in response to an obstacle being present, determine in the image information the projection region of the obstacle on the road surface, according to the obstacle's depth information.
Once the laser point cloud indicates that obstacles are present on the road surface, the point cloud can, for example, be clustered to roughly identify the number of possible obstacles, and each obstacle's depth information can then be mapped into the image information to determine its projection region on the road surface. It should be noted that although multiple obstacles may overlap in the image information, the rules of driving dictate that vehicles keep a certain distance from one another for safety, so clustering based on depth information tends to give more accurate results than clustering based on image information.
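The clustering of laser points into obstacle candidates can be sketched with a simple greedy single-linkage rule. Both the algorithm and the 2 m gap threshold are illustrative assumptions; the patent only states that the point cloud is clustered to estimate the number of obstacles:

```python
import math

def cluster_points(points, gap=2.0):
    """Greedy single-linkage clustering of 2-D laser points.

    A point closer than `gap` metres to any member of an existing
    cluster joins that cluster; otherwise it starts a new one. The
    rule and the 2 m threshold are illustrative assumptions.
    """
    clusters = []
    for p in points:
        for c in clusters:
            if any(math.dist(p, q) < gap for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# Two tight groups of returns → two obstacle candidates.
pts = [(0, 0), (0.5, 0), (10, 10), (10.5, 10)]
print(len(cluster_points(pts)))  # → 2
```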
Specifically, for example, first, the three-dimensional coordinates of the obstacle relative to the current vehicle can be determined from the obstacle's depth information and the calibration parameters of the depth sensor.
Because of manufacturing tolerances, after a depth sensor is mounted on a vehicle, each vehicle must undergo an independent end-of-line sensor calibration or aftermarket sensor adjustment to determine calibration parameters such as the sensor's pitch angle on that vehicle, which are ultimately used for assisted driving and similar purposes. For example, the calibration parameters may be the extrinsic parameter matrix of the depth sensor, which may include one or more of the sensor's pitch angle and tilt angle relative to the current vehicle's direction of travel. Using the calibrated pitch angle and a preset algorithm, the three-dimensional coordinates of each laser point belonging to the obstacle, e.g., (x, y, z), can be computed from the obstacle's depth information. These coordinates may be absolute coordinates in a world coordinate system or coordinates relative to a reference position on the current vehicle.
Next, the height coordinate z of the obstacle's three-dimensional coordinates can be set to zero to generate the coordinates projected onto the road surface; that is, the three-dimensional coordinates of each laser point belonging to the obstacle are changed to (x, y, 0).
Finally, the projection region of the obstacle on the road surface can be determined in the image information from the projected three-dimensional coordinates and the calibration parameters of the imaging device.
As with the depth sensor, manufacturing tolerances mean that after an imaging device is mounted on a vehicle, its calibration parameters, such as its pitch angle on that vehicle, must also first be determined. The projected three-dimensional coordinates of each laser point belonging to the obstacle can then be converted, using the imaging device's pitch angle relative to the vehicle's direction of travel and a preset algorithm, into image coordinates in the image information, and the outermost region enclosing these image coordinates (i.e., their maximum contour region) is determined as the projection region of the obstacle on the road surface.
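The sequence above (zeroing the height coordinate, projecting the points into the image, and taking the outermost enclosing region) can be sketched with an idealized pinhole camera. The intrinsics-only model below ignores the calibrated pitch/tilt extrinsics the patent refers to, so it is an illustrative assumption rather than the patent's algorithm:

```python
def project_to_image(points_3d, fx, fy, cx, cy):
    """Project road-plane points (x, y, 0) into pixel coordinates and
    return their bounding box (a stand-in for the maximum contour region).

    Assumes a pinhole camera with focal lengths fx, fy and principal
    point (cx, cy), looking along the +y (forward) axis with no
    rotation; a real setup would apply the calibrated extrinsics.
    """
    pixels = []
    for x, y, z in points_3d:
        u = fx * x / y + cx  # y is the forward (depth) axis here
        v = fy * z / y + cy
        pixels.append((u, v))
    us = [u for u, _ in pixels]
    vs = [v for _, v in pixels]
    return (min(us), min(vs), max(us), max(vs))

# Two ground-plane points 10 m ahead, 1 m left and right of center.
pts = [(-1.0, 10.0, 0.0), (1.0, 10.0, 0.0)]
print(project_to_image(pts, fx=500, fy=500, cx=320, cy=240))
# → (270.0, 240.0, 370.0, 240.0)
```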
Sub-step S133: label the projection region as an obstacle region on the road surface.
The projection region of the obstacle on the road surface, determined from the outermost region of the image coordinates, can be automatically labeled as an obstacle region on the road surface by circle selection or a similar method.
It can thus be seen that the region labeling method according to the first embodiment of the present application can, in the process of generating training samples for training a machine learning model, acquire image information of a driving environment captured by an imaging device, acquire depth information of the driving environment that is time-synchronized with the image information, and label obstacle regions of the driving environment in the image information according to the depth information. Compared with manual labeling of obstacle regions as in the prior art, obstacle regions in a driving environment can be labeled automatically, improving labeling efficiency.
In the first embodiment above, obstacle regions can be automatically marked in the image information captured by the imaging device by combining depth information from a depth sensor during labeling. However, for purposes such as assisted driving, it is desirable to label not only the obstacle regions but also the drivable area of the entire driving environment, and to generate the training samples for the machine learning model from both labeling results.
To address this, a second embodiment of the present application is proposed on the basis of the first embodiment.
图5图示了根据本申请第二实施例的区域标注方法的流程图。FIG. 5 illustrates a flowchart of a region labeling method according to the second embodiment of the present application.
如图5所示,根据本申请第二实施例的区域标注方法可以包括:As shown in FIG. 5 , the area labeling method according to the second embodiment of the present application may include:
在图5中,采用了相同的附图标记来指示与图2相同的步骤。因此,图5中的步骤S110-S130与图2的步骤S110-S130相同,并可以参见上面结合图2到图4进行的描述。图5与图2的不同之处在于增加了步骤S140和进一步可选的步骤S150。In FIG. 5 , the same reference numerals are used to refer to the same steps as in FIG. 2 . Therefore, steps S110 - S130 in FIG. 5 are the same as steps S110 - S130 in FIG. 2 , and reference may be made to the above description in conjunction with FIGS. 2 to 4 . The difference between FIG. 5 and FIG. 2 is that step S140 and a further optional step S150 are added.
在步骤S140中,根据用户输入和所述障碍物区域来在所述图像信息中标注所述行驶环境中的可行驶区域。In step S140, a drivable area in the driving environment is marked in the image information according to the user input and the obstacle area.
当在所述图像信息中标注了道路路面上的障碍物区域之前、之后、或与之同时地,还可以通过各种方法来在图像信息中检测所述道路路面上的可行驶区域。Before, after, or at the same time as the obstacle area on the road surface is marked in the image information, various methods can also be used to detect the drivable area on the road surface in the image information.
图6图示了根据本申请实施例的标注可行驶区域步骤的流程图。FIG. 6 illustrates a flowchart of the steps of marking a drivable area according to an embodiment of the present application.
如图6所示,步骤S140可以包括:As shown in FIG. 6, step S140 may include:
在子步骤S141中,接收用户输入。In sub-step S141, user input is received.
该用户输入可以是用户基于人眼识别寻找到的道路路面的边界位置信息,其可以包括图像上的坐标输入或圈选输入等。The user input may be the boundary position information of the road surface found by the user based on human eye recognition, which may include coordinate input or circle selection input on the image, and the like.
在子步骤S142中,根据所述用户输入来确定所述道路路面的路面边界。In sub-step S142, a road surface boundary of the road surface is determined according to the user input.
例如,可以根据用户输入的边界位置信息来在图像信息中标注出所述道路路面的路面边界。例如,所述路面边界可以为以下各项中的至少一个:路沿、隔离带、绿化带、护栏、车道线、和其他车辆的边缘。For example, the road surface boundary of the road surface can be marked in the image information according to the boundary position information input by the user. For example, the pavement boundary may be at least one of the following: a curb, a divider, a green belt, a guardrail, a lane line, and the edge of other vehicles.
在子步骤S143中,根据所述路面边界和所述障碍物区域来标注所述道路路面上的可行驶区域。In sub-step S143, the drivable area on the road surface is marked according to the road surface boundary and the obstacle area.
例如,可以根据所述路面边界确定所述道路路面上的路面区域,并且从所述路面区域中去除所述障碍物区域,以获得所述可行驶区域。For example, a road surface area on the road surface may be determined from the road surface boundary, and the obstacle area may be removed from the road surface area to obtain the drivable area.
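This "road surface area minus obstacle area" operation can be sketched as a per-pixel mask subtraction (a minimal illustration; the mask representation and names are assumptions, not from the patent):

```python
def drivable_area(road_mask, obstacle_mask):
    """Drivable area = road surface area minus obstacle area, per pixel.

    Both masks are H x W lists of truthy/falsy values of the same shape.
    """
    return [
        [bool(r) and not bool(o) for r, o in zip(r_row, o_row)]
        for r_row, o_row in zip(road_mask, obstacle_mask)
    ]

road = [[1, 1, 1],
        [1, 1, 1],
        [0, 0, 0]]   # bottom row lies outside the road surface boundary
obst = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]   # projected obstacle column on the road surface
free = drivable_area(road, obst)
```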
下面,将通过一个具体的实验来说明本申请实施例的效果。Next, the effects of the embodiments of the present application will be described through a specific experiment.
图7A图示了根据本申请实施例的在图1所示的图像信息中结合有深度信息和用户输入的示意图，图7B图示了根据本申请实施例的在图1所示的图像信息中标注有障碍物区域和可行驶区域的示意图。FIG. 7A illustrates a schematic diagram of the image information shown in FIG. 1 combined with depth information and user input according to an embodiment of the present application, and FIG. 7B illustrates a schematic diagram of the image information shown in FIG. 1 annotated with obstacle areas and drivable areas according to an embodiment of the present application.
参考图7A，可以在标注过程中，获取时间同步的图像信息和激光传感器信息。在图像信息中，用户可以通过人眼辨识在图1所示的道路路面中标注出其中存在4条车道线（车道线1到4）和1条交界线（交界线1），作为路面边界的候选标识。这里，可以取决于不同的辅助驾驶策略来确定当前车辆可行驶的路面范围。例如，在车道线3和车道线4为实线时，在通常情况下，可以将它们作为路面边界来确定路面范围，但是在紧急情况下（如前方或后方出现可能碰撞的预警时），也可以将最大的物理可行驶范围，即道路边界1和道路边界5作为路面边界来确定路面范围。另外，如图7A所示，还可以基于激光点云等深度信息检测出在该道路路面中存在3个激光点簇（激光点簇1到3）。接下来，可以通过将激光点簇1到3的空间坐标进行转换并在图像信息中投射到道路路面的地平面上，获得障碍物1到3与地平面的交点。最后，可以将交点以上的最大轮廓区域，标注为障碍物区域，也即不可行驶区域，而剩余区域即可以标注为可行驶区域，如图7B所示，其中以将道路边界1和道路边界5作为路面边界为例进行了图示。Referring to FIG. 7A, time-synchronized image information and laser sensor information can be acquired during the labeling process. In the image information, through human eye recognition, the user can mark that there are 4 lane lines (lane lines 1 to 4) and 1 boundary line (boundary line 1) on the road surface shown in FIG. 1, as candidate road surface boundaries. Here, the range of road surface on which the current vehicle can travel may be determined depending on different assisted driving strategies. For example, when lane line 3 and lane line 4 are solid lines, they can normally be taken as the road surface boundaries to determine the road surface range; however, in an emergency (e.g., when a warning of a possible collision ahead or behind is issued), the maximum physically drivable range, namely road boundary 1 and road boundary 5, can also be taken as the road surface boundaries to determine the road surface range. In addition, as shown in FIG. 7A, three laser point clusters (laser point clusters 1 to 3) can be detected on the road surface based on depth information such as a laser point cloud. Next, the intersections of obstacles 1 to 3 with the ground plane can be obtained by transforming the spatial coordinates of laser point clusters 1 to 3 and projecting them onto the ground plane of the road surface in the image information. Finally, the largest contour area above each intersection can be marked as an obstacle area, that is, a non-drivable area, and the remaining area can be marked as the drivable area, as shown in FIG. 7B, which is illustrated taking road boundary 1 and road boundary 5 as the road surface boundaries by way of example.
返回参考图5,接下来,可选地,在步骤S150中,基于其中标注有所述可行驶区域的图像信息来生成所述训练样本。Referring back to FIG. 5 , next, optionally, in step S150 , the training samples are generated based on image information in which the drivable area is marked.
例如,可以将图像信息和相关联的标注信息打包在一起,以生成训练样本,以供机器学习模型的后续训练使用。For example, image information and associated annotation information can be packaged together to generate training samples for subsequent training of machine learning models.
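One plausible way to package an image with its annotations is a simple JSON record; the schema and field names below are illustrative assumptions, not mandated by the method — any format the training pipeline expects would work:

```python
import json

def make_training_sample(image_path, obstacle_regions, drivable_region):
    """Bundle an image reference and its annotations into one training
    sample record, serialized as JSON.

    obstacle_regions / drivable_region: polygons in pixel coordinates
    (hypothetical representation chosen for this sketch).
    """
    sample = {
        "image": image_path,
        "annotations": {
            "obstacle_regions": obstacle_regions,
            "drivable_region": drivable_region,
        },
    }
    return json.dumps(sample)

record = make_training_sample(
    "frame_000123.png",
    obstacle_regions=[[[120, 200], [180, 200], [180, 260], [120, 260]]],
    drivable_region=[[0, 300], [640, 300], [640, 480], [0, 480]],
)
```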
由此可见，采用根据本申请第二实施例的区域标注方法，可以根据深度传感器所采集的深度信息在成像器件所采集的图像信息中标注所述行驶环境中的障碍物区域，并且还可以根据用户输入在所述图像信息中标注行驶环境中的环境边界，根据所述环境边界和所述障碍物区域来确定所述行驶环境中的可行驶区域，并基于其中标注有所述可行驶区域的图像信息来生成所述训练样本。因此，能够可靠地且高效地检测行驶环境中的可行驶区域并生成供机器学习模型使用的训练样本。It can be seen that, with the region labeling method according to the second embodiment of the present application, the obstacle area in the driving environment can be marked in the image information collected by the imaging device according to the depth information collected by the depth sensor; moreover, an environment boundary in the driving environment can be marked in the image information according to user input, the drivable area in the driving environment can be determined according to the environment boundary and the obstacle area, and the training samples can be generated based on the image information in which the drivable area is marked. Therefore, the drivable area in the driving environment can be detected reliably and efficiently, and training samples for use by a machine learning model can be generated.
示例性装置Exemplary device
下面,参考图8来描述根据本申请实施例的区域标注装置。Hereinafter, with reference to FIG. 8 , a region marking apparatus according to an embodiment of the present application will be described.
图8图示了根据本申请实施例的区域标注装置的框图。FIG. 8 illustrates a block diagram of a region labeling apparatus according to an embodiment of the present application.
如图8所示，所述区域标注装置100可以包括：图像获取单元110，用于在生成用于训练机器学习模型的训练样本的过程中，获取成像器件所采集的行驶环境的图像信息；深度获取单元120，用于获取与所述图像信息在时间上同步的所述行驶环境的深度信息；以及障碍标注单元130，用于根据所述深度信息在所述图像信息中标注所述行驶环境中的障碍物区域。As shown in FIG. 8, the region labeling apparatus 100 may include: an image acquisition unit 110 configured to acquire image information of a driving environment collected by an imaging device in the process of generating training samples for training a machine learning model; a depth acquisition unit 120 configured to acquire depth information of the driving environment that is time-synchronized with the image information; and an obstacle labeling unit 130 configured to mark an obstacle area in the driving environment in the image information according to the depth information.
在一个示例中，所述图像获取单元110可以获取当前车辆的行驶方向中的道路路面的图像信息。In one example, the image acquisition unit 110 may acquire image information of the road surface in the driving direction of the current vehicle.
在一个示例中，所述深度获取单元120可以包括：时间确定模块，用于确定所述成像器件采集所述图像信息的采集时间；以及深度获取模块，用于获取所述当前车辆的深度传感器在所述采集时间所采集的所述行驶方向中的道路路面的深度信息。In one example, the depth acquisition unit 120 may include: a time determination module configured to determine the acquisition time at which the imaging device collects the image information; and a depth acquisition module configured to acquire depth information of the road surface in the driving direction collected by the depth sensor of the current vehicle at the acquisition time.
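The time synchronization described here can be realized in practice by nearest-timestamp matching between the image acquisition time and the depth sweeps. The sketch below is an assumption-laden illustration (timestamped depth frames and the tolerance value are hypothetical), not the patent's prescribed mechanism:

```python
def nearest_depth_frame(image_time, depth_frames, max_skew=0.05):
    """Pick the depth frame whose timestamp is closest to the image's
    acquisition time; reject the match if the skew exceeds max_skew seconds.

    depth_frames: list of (timestamp, depth_data) tuples.
    Returns the matched depth_data, or None if no frame is close enough.
    """
    if not depth_frames:
        return None
    ts, data = min(depth_frames, key=lambda f: abs(f[0] - image_time))
    return data if abs(ts - image_time) <= max_skew else None

frames = [(0.00, "sweep-a"), (0.10, "sweep-b"), (0.21, "sweep-c")]
matched = nearest_depth_frame(0.12, frames)   # closest sweep within tolerance
missed = nearest_depth_frame(0.50, frames)    # no sweep close enough
```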
在一个示例中，所述障碍标注单元130可以包括：障碍判断模块，用于根据所述深度信息来判断在所述道路路面上是否存在障碍物；投影确定模块，用于响应于存在障碍物，根据所述障碍物的深度信息在所述图像信息中确定所述障碍物在所述道路路面上的投影区域；以及障碍标注模块，用于将所述投影区域标注为所述道路路面上的障碍物区域。In one example, the obstacle labeling unit 130 may include: an obstacle judgment module configured to judge whether there is an obstacle on the road surface according to the depth information; a projection determination module configured to, in response to the presence of an obstacle, determine a projection area of the obstacle on the road surface in the image information according to the depth information of the obstacle; and an obstacle labeling module configured to mark the projection area as the obstacle area on the road surface.
在一个示例中，所述投影确定模块可以根据所述障碍物的深度信息和所述深度传感器的标定参数来确定所述障碍物相对于所述当前车辆的三维坐标；将所述障碍物的三维坐标中的高度坐标设置为零，以生成向所述道路路面上投影后的三维坐标；以及根据所述投影后的三维坐标和所述成像器件的标定参数来在所述图像信息中确定所述障碍物在所述道路路面上的投影区域。In one example, the projection determination module may determine the three-dimensional coordinates of the obstacle relative to the current vehicle according to the depth information of the obstacle and the calibration parameters of the depth sensor; set the height coordinate in the three-dimensional coordinates of the obstacle to zero to generate three-dimensional coordinates projected onto the road surface; and determine the projection area of the obstacle on the road surface in the image information according to the projected three-dimensional coordinates and the calibration parameters of the imaging device.
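The three sub-operations just described (obstacle coordinates from depth and calibration, zeroing the height coordinate, projecting with the imaging device's calibration) can be sketched with a simple pinhole camera model. Everything here is an illustrative assumption: the calibration values are made up, the camera origin is taken to lie on the road plane, and a real system would also apply the sensor-to-camera extrinsics and lens distortion obtained during calibration:

```python
def project_obstacle_to_image(points_xyz, fx, fy, cx, cy):
    """Drop obstacle points onto the road plane, then project into the image.

    points_xyz: obstacle points as (x, y, z) in the camera/vehicle frame,
        with x lateral, y height above the road, z forward depth.
    fx, fy, cx, cy: pinhole calibration parameters of the imaging device
        (illustrative values for this sketch).
    """
    pixels = []
    for x, y, z in points_xyz:
        y = 0.0                  # set the height coordinate to zero (road plane)
        if z <= 0:               # points behind the camera are not visible
            continue
        u = fx * x / z + cx      # standard pinhole projection
        v = fy * y / z + cy
        pixels.append((round(u), round(v)))
    return pixels

pts = [(1.0, 0.8, 10.0), (1.2, 0.1, 10.0)]
proj = project_obstacle_to_image(pts, fx=800, fy=800, cx=320, cy=240)
```

Because the height is zeroed before projection, all points land on the same image row in this simplified model, tracing the obstacle's footprint on the road surface.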
在一个示例中,所述障碍物可以为以下各项中的至少一个:行人、动物、遗撒物、警示牌、隔离墩、和其他车辆。In one example, the obstacle may be at least one of: pedestrians, animals, litter, warning signs, barriers, and other vehicles.
在一个示例中，所述区域标注装置100还可以包括：可行驶标注单元（未示出），用于根据用户输入和所述障碍物区域来在所述图像信息中标注所述行驶环境中的可行驶区域。In one example, the region labeling apparatus 100 may further include a drivable area labeling unit (not shown) configured to mark a drivable area in the driving environment in the image information according to user input and the obstacle area.
在一个示例中，所述可行驶标注单元可以包括：输入接收模块，用于接收用户输入；边界确定模块，用于根据所述用户输入来确定所述道路路面的路面边界；以及可行驶标注模块，用于根据所述路面边界和所述障碍物区域来标注所述道路路面上的可行驶区域。In one example, the drivable area labeling unit may include: an input receiving module configured to receive user input; a boundary determination module configured to determine the road surface boundary of the road surface according to the user input; and a drivable area labeling module configured to mark the drivable area on the road surface according to the road surface boundary and the obstacle area.
在一个示例中，所述可行驶标注模块可以根据所述路面边界确定所述道路路面上的路面区域；以及从所述路面区域中去除所述障碍物区域，以获得所述可行驶区域。In one example, the drivable area labeling module may determine the road surface area on the road surface according to the road surface boundary, and remove the obstacle area from the road surface area to obtain the drivable area.
在一个示例中，所述区域标注装置100还可以包括：样本生成单元（未示出），用于基于其中标注有所述可行驶区域的图像信息来生成所述训练样本。In one example, the region labeling apparatus 100 may further include a sample generation unit (not shown) configured to generate the training samples based on the image information in which the drivable area is marked.
上述区域标注装置100中的各个单元和模块的具体功能和操作已经在上面参考图1到图7B描述的区域标注方法中详细介绍，并因此，将省略其重复描述。The specific functions and operations of the respective units and modules in the above region labeling apparatus 100 have been described in detail in the region labeling method described above with reference to FIGS. 1 to 7B, and repeated description thereof is therefore omitted.
如上所述，本申请的实施例可以应用于对其上装备有成像器件的诸如交通工具、可移动机器人、固定监控摄像头之类的各种在线电子设备所处的行驶环境中的障碍物区域进行标注。并且，根据本申请实施例的区域标注方法和区域标注装置可以直接实现在上述在线电子设备上。但是，考虑到在线电子设备往往处理能力有限，所以为了获得更好的性能，也可以将本申请的实施例实现在能够与在线电子设备进行通信以向其传送训练好的机器学习模型的各种离线的电子设备中。例如，该离线的电子设备可以包括诸如终端设备、服务器等。As described above, the embodiments of the present application can be applied to labeling obstacle areas in the driving environment of various online electronic devices equipped with imaging devices, such as vehicles, mobile robots, and fixed surveillance cameras. Moreover, the region labeling method and region labeling apparatus according to the embodiments of the present application can be implemented directly on the above online electronic devices. However, considering that online electronic devices often have limited processing capability, in order to obtain better performance, the embodiments of the present application can also be implemented in various offline electronic devices that can communicate with the online electronic devices to transmit trained machine learning models to them. For example, the offline electronic devices may include terminal devices, servers, and the like.
相应地，根据本申请实施例的区域标注装置100可以作为一个软件模块和/或硬件模块而集成到该离线的电子设备中，换言之，该电子设备可以包括该区域标注装置100。例如，该区域标注装置100可以是该电子设备的操作系统中的一个软件模块，或者可以是针对于该电子设备所开发的一个应用程序；当然，该区域标注装置100同样可以是该电子设备的众多硬件模块之一。Correspondingly, the region labeling apparatus 100 according to the embodiments of the present application may be integrated into the offline electronic device as a software module and/or a hardware module; in other words, the electronic device may include the region labeling apparatus 100. For example, the region labeling apparatus 100 may be a software module in the operating system of the electronic device, or may be an application developed for the electronic device; of course, the region labeling apparatus 100 may equally be one of the many hardware modules of the electronic device.
替换地，在另一示例中，该区域标注装置100与该离线的电子设备也可以是分立的设备，并且该区域标注装置100可以通过有线和/或无线网络连接到该电子设备，并且按照约定的数据格式来传输交互信息。Alternatively, in another example, the region labeling apparatus 100 and the offline electronic device may also be separate devices, and the region labeling apparatus 100 may be connected to the electronic device through a wired and/or wireless network and transmit interaction information in an agreed data format.
示例性电子设备Exemplary Electronic Device
下面，参考图9来描述根据本申请实施例的电子设备。该电子设备可以是其上装备有成像器件的诸如交通工具、可移动机器人之类的在线电子设备，也可以是能够与在线电子设备进行通信以向其传送训练好的机器学习模型的离线的电子设备。Hereinafter, an electronic device according to an embodiment of the present application will be described with reference to FIG. 9. The electronic device may be an online electronic device equipped with an imaging device, such as a vehicle or a mobile robot, or may be an offline electronic device capable of communicating with the online electronic device to transmit a trained machine learning model to it.
图9图示了根据本申请实施例的电子设备的框图。FIG. 9 illustrates a block diagram of an electronic device according to an embodiment of the present application.
如图9所示，电子设备10包括一个或多个处理器11和存储器12。As shown in FIG. 9, the electronic device 10 includes one or more processors 11 and a memory 12.
处理器11可以是中央处理单元（CPU）或者具有数据处理能力和/或指令执行能力的其他形式的处理单元，并且可以控制电子设备10中的其他组件以执行期望的功能。The processor 11 may be a central processing unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 10 to perform desired functions.
存储器12可以包括一个或多个计算机程序产品，所述计算机程序产品可以包括各种形式的计算机可读存储介质，例如易失性存储器和/或非易失性存储器。所述易失性存储器例如可以包括随机存取存储器（RAM）和/或高速缓冲存储器（cache）等。所述非易失性存储器例如可以包括只读存储器（ROM）、硬盘、闪存等。在所述计算机可读存储介质上可以存储一个或多个计算机程序指令，处理器11可以运行所述程序指令，以实现上文所述的本申请的各个实施例的区域标注方法以及/或者其他期望的功能。在所述计算机可读存储介质中还可以存储诸如图像信息、深度信息、障碍物区域、可行驶区域、标注信息等各种内容。The memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disks, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 11 may execute the program instructions to implement the region labeling methods of the various embodiments of the present application described above and/or other desired functions. Various contents such as image information, depth information, obstacle areas, drivable areas, and annotation information may also be stored in the computer-readable storage medium.
在一个示例中，电子设备10还可以包括：输入装置13和输出装置14，这些组件通过总线系统和/或其他形式的连接机构（未示出）互连。应当注意，图9所示的电子设备10的组件和结构只是示例性的、而非限制性的，根据需要，电子设备10也可以具有其他组件和结构。In one example, the electronic device 10 may further include an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown). It should be noted that the components and structure of the electronic device 10 shown in FIG. 9 are only exemplary rather than limiting, and the electronic device 10 may have other components and structures as required.
例如，该输入装置13可以是成像器件，用于采集图像信息，所采集的图像信息可以被存储在存储器12中以供其他组件使用。当然，也可以利用其他集成或分立的成像器件来采集该图像帧序列，并且将它发送到电子设备10。又如，该输入装置13也可以是深度传感器，用于采集深度信息，所采集的深度信息也可以被存储在存储器12中。此外，该输入设备13还可以包括例如键盘、鼠标、以及通信网络及其所连接的远程输入设备等等。For example, the input device 13 may be an imaging device for collecting image information, and the collected image information may be stored in the memory 12 for use by other components. Of course, other integrated or discrete imaging devices may also be used to collect the image frame sequence and send it to the electronic device 10. As another example, the input device 13 may also be a depth sensor for collecting depth information, and the collected depth information may also be stored in the memory 12. In addition, the input device 13 may also include, for example, a keyboard, a mouse, and a communication network and the remote input devices connected thereto.
输出装置14可以向外部（例如，用户或机器学习模型）输出各种信息，包括确定出的行驶环境的障碍物区域、可行驶区域、训练样本等。该输出设备14可以包括例如显示器、扬声器、打印机、以及通信网络及其所连接的远程输出设备等等。The output device 14 may output various information to the outside (e.g., a user or a machine learning model), including the determined obstacle area of the driving environment, the drivable area, training samples, and the like. The output device 14 may include, for example, a display, a speaker, a printer, and a communication network and the remote output devices connected thereto.
当然,为了简化,图9中仅示出了该电子设备10中与本申请有关的组件中的一些,省略了诸如总线、输入/输出接口等等的组件。除此之外,根据具体应用情况,电子设备10还可以包括任何其他适当的组件。Of course, for simplicity, only some of the components in the electronic device 10 related to the present application are shown in FIG. 9 , and components such as buses, input/output interfaces and the like are omitted. Besides, the electronic device 10 may also include any other suitable components according to the specific application.
示例性计算机程序产品和计算机可读存储介质Exemplary computer program product and computer readable storage medium
除了上述方法和设备以外，本申请的实施例还可以是计算机程序产品，其包括计算机程序指令，所述计算机程序指令在被处理器运行时使得所述处理器执行本说明书上述“示例性方法”部分中描述的根据本申请各种实施例的区域标注方法中的步骤。In addition to the methods and apparatuses described above, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in the region labeling method according to various embodiments of the present application described in the "Exemplary Method" section of this specification.
所述计算机程序产品可以以一种或多种程序设计语言的任意组合来编写用于执行本申请实施例操作的程序代码，所述程序设计语言包括面向对象的程序设计语言，诸如Java、C++等，还包括常规的过程式程序设计语言，诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算设备上执行、部分地在用户设备上执行、作为一个独立的软件包执行、部分在用户计算设备上部分在远程计算设备上执行、或者完全在远程计算设备或服务器上执行。Program code for performing the operations of the embodiments of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
此外，本申请的实施例还可以是计算机可读存储介质，其上存储有计算机程序指令，所述计算机程序指令在被处理器运行时使得所述处理器执行本说明书上述“示例性方法”部分中描述的根据本申请各种实施例的区域标注方法中的步骤。In addition, embodiments of the present application may also be a computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, cause the processor to perform the steps in the region labeling method according to various embodiments of the present application described in the "Exemplary Method" section of this specification.
所述计算机可读存储介质可以采用一个或多个可读介质的任意组合。可读介质可以是可读信号介质或者可读存储介质。可读存储介质例如可以包括但不限于电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses or devices, or a combination of any of the above. More specific examples (non-exhaustive list) of readable storage media include: electrical connections with one or more wires, portable disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
以上结合具体实施例描述了本申请的基本原理，但是，需要指出的是，在本申请中提及的优点、优势、效果等仅是示例而非限制，不能认为这些优点、优势、效果等是本申请的各个实施例必须具备的。另外，上述公开的具体细节仅是为了示例的作用和便于理解的作用，而非限制，上述细节并不限制本申请为必须采用上述具体的细节来实现。The basic principles of the present application have been described above in conjunction with specific embodiments. However, it should be pointed out that the merits, advantages, effects, etc. mentioned in the present application are only examples rather than limitations, and it cannot be assumed that each embodiment of the present application must possess them. In addition, the specific details disclosed above are only for the purpose of illustration and ease of understanding rather than limitation, and the above details do not restrict the present application to being implemented with those specific details.
本申请中涉及的器件、装置、设备、系统的方框图仅作为例示性的例子并且不意图要求或暗示必须按照方框图示出的方式进行连接、布置、配置。如本领域技术人员将认识到的，可以按任意方式连接、布置、配置这些器件、装置、设备、系统。诸如“包括”、“包含”、“具有”等等的词语是开放性词汇，指“包括但不限于”，且可与其互换使用。这里所使用的词汇“或”和“和”指词汇“和/或”，且可与其互换使用，除非上下文明确指示不是如此。这里所使用的词汇“诸如”指词组“诸如但不限于”，且可与其互换使用。The block diagrams of the devices, apparatuses, equipment, and systems referred to in this application are merely illustrative examples and are not intended to require or imply that the connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As those skilled in the art will appreciate, these devices, apparatuses, equipment, and systems may be connected, arranged, or configured in any manner. Words such as "comprising", "including", and "having" are open-ended words meaning "including but not limited to" and are used interchangeably therewith. As used herein, the words "or" and "and" refer to and are used interchangeably with the word "and/or" unless the context clearly dictates otherwise. As used herein, the word "such as" refers to and is used interchangeably with the phrase "such as but not limited to".
还需要指出的是,在本申请的装置、设备和方法中,各部件或各步骤是可以分解和/或重新组合的。这些分解和/或重新组合应视为本申请的等效方案。It should also be pointed out that in the apparatus, equipment and method of the present application, each component or each step can be decomposed and/or recombined. These disaggregations and/or recombinations should be considered as equivalents of the present application.
提供所公开的方面的以上描述以使本领域的任何技术人员能够做出或者使用本申请。对这些方面的各种修改对于本领域技术人员而言是非常显而易见的,并且在此定义的一般原理可以应用于其他方面而不脱离本申请的范围。因此,本申请不意图被限制到在此示出的方面,而是按照与在此公开的原理和新颖的特征一致的最宽范围。The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use this application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Therefore, this application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
为了例示和描述的目的已经给出了以上描述。此外,此描述不意图将本申请的实施例限制到在此公开的形式。尽管以上已经讨论了多个示例方面和实施例,但是本领域技术人员将认识到其某些变型、修改、改变、添加和子组合。The foregoing description has been presented for the purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the forms disclosed herein. Although a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions and sub-combinations thereof.
Claims (12)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610921206.7A CN106503653B (en) | 2016-10-21 | 2016-10-21 | Region labeling method and device and electronic equipment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN106503653A CN106503653A (en) | 2017-03-15 |
| CN106503653B true CN106503653B (en) | 2020-10-13 |
Family
ID=58318354
Families Citing this family (29)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108168566B (en) * | 2016-12-07 | 2020-09-04 | 北京三快在线科技有限公司 | Road determination method and device and electronic equipment |
| CN106952308B (en) * | 2017-04-01 | 2020-02-28 | 上海蔚来汽车有限公司 | Method and system for determining position of moving object |
| CN107437268A (en) * | 2017-07-31 | 2017-12-05 | 广东欧珀移动通信有限公司 | Photographing method, device, mobile terminal and computer storage medium |
| CN107907886A (en) * | 2017-11-07 | 2018-04-13 | 广东欧珀移动通信有限公司 | Driving condition recognition method, device, storage medium and terminal equipment |
| CN108256413B (en) * | 2017-11-27 | 2022-02-25 | 科大讯飞股份有限公司 | Passable area detection method and device, storage medium and electronic equipment |
| CN108563742B (en) * | 2018-04-12 | 2022-02-01 | 王海军 | Method for automatically creating artificial intelligence image recognition training material and labeled file |
| US10816984B2 (en) * | 2018-04-13 | 2020-10-27 | Baidu Usa Llc | Automatic data labelling for autonomous driving vehicles |
| CN108827309B (en) * | 2018-06-29 | 2021-08-17 | 炬大科技有限公司 | Robot path planning method and dust collector with same |
| CN109271944B (en) * | 2018-09-27 | 2021-03-12 | 百度在线网络技术(北京)有限公司 | Obstacle detection method, obstacle detection device, electronic apparatus, vehicle, and storage medium |
| DK180774B1 (en) | 2018-10-29 | 2022-03-04 | Motional Ad Llc | Automatic annotation of environmental features in a map during navigation of a vehicle |
| CN109376664B (en) * | 2018-10-29 | 2021-03-09 | 百度在线网络技术(北京)有限公司 | Machine learning training method, device, server and medium |
| CN111323026B (en) * | 2018-12-17 | 2023-07-07 | 兰州大学 | A Ground Filtering Method Based on High Precision Point Cloud Map |
| CN109683613B (en) * | 2018-12-24 | 2022-04-29 | 驭势(上海)汽车科技有限公司 | Method and device for determining auxiliary control information of vehicle |
| CN109765634B (en) * | 2019-01-18 | 2021-09-17 | 广州市盛光微电子有限公司 | Depth marking device |
| CN110032181B (en) * | 2019-02-26 | 2022-05-17 | 文远知行有限公司 | Obstacle locating method, device, computer equipment and storage medium in semantic map |
| CN111696144B (en) * | 2019-03-11 | 2024-06-25 | 北京地平线机器人技术研发有限公司 | Depth information determining method, depth information determining device and electronic equipment |
| CN110096059B (en) * | 2019-04-25 | 2022-03-01 | 杭州飞步科技有限公司 | Automatic driving method, device, equipment and storage medium |
| CN110197148B (en) * | 2019-05-23 | 2020-12-01 | 北京三快在线科技有限公司 | Target object labeling method and device, electronic equipment and storage medium |
| CN111027381A (en) * | 2019-11-06 | 2020-04-17 | 杭州飞步科技有限公司 | Method, device, device and storage medium for identifying obstacles using monocular camera |
| CN110866504B (en) * | 2019-11-20 | 2023-10-17 | 北京百度网讯科技有限公司 | Methods, devices and equipment for obtaining annotated data |
| CN111125442B (en) * | 2019-12-11 | 2022-11-15 | 苏州智加科技有限公司 | Data labeling method and device |
| CN111368794B (en) * | 2020-03-19 | 2023-09-19 | 北京百度网讯科技有限公司 | Obstacle detection methods, devices, equipment and media |
| CN112639822B (en) * | 2020-03-27 | 2021-11-30 | 华为技术有限公司 | Data processing method and device |
| CN111552289B (en) * | 2020-04-28 | 2021-07-06 | 苏州高之仙自动化科技有限公司 | Detection method and virtual radar device, electronic equipment, storage medium |
| CN112200049B (en) * | 2020-09-30 | 2023-03-31 | 华人运通(上海)云计算科技有限公司 | Method, device and equipment for marking road surface topography data and storage medium |
| CN112714266B (en) * | 2020-12-18 | 2023-03-31 | 北京百度网讯科技有限公司 | Method and device for displaying labeling information, electronic equipment and storage medium |
| CN114675645A (en) * | 2022-03-23 | 2022-06-28 | 江苏眸视机器人科技有限公司 | A control method, system, storage medium and electronic device |
| CN115164910B (en) * | 2022-06-22 | 2023-02-21 | 小米汽车科技有限公司 | Driving route generation method, device, vehicle, storage medium and chip |
| CN116434166A (en) * | 2023-03-14 | 2023-07-14 | 北京百度网讯科技有限公司 | Target area identification method, device, equipment and storage medium |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101428403B1 (en) * | 2013-07-17 | 2014-08-07 | 현대자동차주식회사 | Apparatus and method for detecting obstacle in front |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4406381B2 (en) * | 2004-07-13 | 2010-01-27 | 株式会社東芝 | Obstacle detection apparatus and method |
| JP6262068B2 (en) * | 2014-04-25 | 2018-01-17 | 日立建機株式会社 | Near-body obstacle notification system |
| CN108594851A (en) * | 2015-10-22 | 2018-09-28 | 飞智控(天津)科技有限公司 | A kind of autonomous obstacle detection system of unmanned plane based on binocular vision, method and unmanned plane |
| CN105319991B (en) * | 2015-11-25 | 2018-08-28 | 哈尔滨工业大学 | A kind of robot environment's identification and job control method based on Kinect visual informations |
| CN105957145A (en) * | 2016-04-29 | 2016-09-21 | 百度在线网络技术(北京)有限公司 | Road barrier identification method and device |
Non-Patent Citations (1)
| Title |
|---|
| 辅助驾驶中的路面障碍检测技术研究;赵日成;《中国优秀硕士学位论文全文数据库 信息科技辑》;20160315;第2016年卷(第03期);第I138-6885页 * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||