
CN108803617A - Trajectory prediction method and device - Google Patents

Trajectory prediction method and device

Info

Publication number
CN108803617A
CN108803617A
Authority
CN
China
Prior art keywords
trajectory
information
vehicle
video sequence
surrounding vehicles
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810752554.5A
Other languages
Chinese (zh)
Other versions
CN108803617B (en)
Inventor
邹文斌
周长源
吴迪
王振楠
唐毅
李霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University
Priority to CN201810752554.5A
Publication of CN108803617A
Application granted
Publication of CN108803617B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276 - Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

An embodiment of the present invention provides a trajectory prediction method and device, relating to the field of local navigation for robots and intelligent vehicles and applied to a vehicle equipped with an on-board camera. The method includes: photographing the surrounding environment with the on-board camera to acquire a video sequence containing the surrounding vehicles and the vehicle background; locating the surrounding vehicles in the video sequence and extracting their historical trajectory information, and taking the scene semantic information obtained by image segmentation of the video sequence as auxiliary information; and inputting the historical trajectory information and the auxiliary information into a neural network model to obtain the predicted trajectories of the surrounding vehicles. This trajectory prediction method can improve the accuracy of vehicle trajectory prediction.

Description

Trajectory prediction method and device

Technical Field

The present invention relates to the field of local navigation for robots and intelligent vehicles, and in particular to a trajectory prediction method and device.

Background Art

While a vehicle is driving, predicting the future trajectories of other traffic participants is essential to prevent an autonomous vehicle from colliding with them. Assuming that all traffic participants obey traffic rules, a human driver can subconsciously predict a target's future trajectory; an autonomous vehicle, by contrast, typically predicts the future trajectories of other traffic participants by building a model.

However, most current work either uses static images to extract visual semantic information or adopts an end-to-end architecture to learn a driving network. The former ignores the temporal continuity of driving situations, while the latter lacks interpretability of the trained network; both therefore limit the accuracy of vehicle trajectory prediction.

Summary of the Invention

The main purpose of the present invention is to provide a trajectory prediction method and device that can improve the accuracy of vehicle trajectory prediction.

The trajectory prediction method provided by the first aspect of the embodiments of the present invention is applied to a vehicle equipped with an on-board camera. The method includes: photographing the surrounding environment with the on-board camera to acquire a video sequence containing the surrounding vehicles and the vehicle background; locating the surrounding vehicles in the video sequence and extracting their historical trajectory information, and taking the scene semantic information obtained by image segmentation of the video sequence as auxiliary information; and inputting the historical trajectory information and the auxiliary information into a neural network model to obtain the predicted trajectories of the surrounding vehicles.

The trajectory prediction device provided by the second aspect of the embodiments of the present invention is applied to a vehicle equipped with an on-board camera. The device includes: an acquisition module, configured to photograph the surrounding environment with the on-board camera and acquire a video sequence containing the surrounding vehicles and the vehicle background; an extraction and segmentation module, configured to locate the surrounding vehicles in the video sequence, extract their historical trajectory information, and take the scene semantic information obtained by image segmentation of the video sequence as auxiliary information; and an output module, configured to input the historical trajectory information and the auxiliary information into a neural network model to obtain the predicted trajectories of the surrounding vehicles.

As can be seen from the above embodiments, a video sequence containing the surrounding vehicles and the vehicle background is acquired by the on-board camera, the video sequence is image-segmented to obtain scene semantic information, and the scene semantic information and the historical trajectory information are then input into a neural network model to obtain the predicted trajectories, instead of extracting scene semantic information from static images for analysis. This preserves the temporal continuity of the neural network model in these embodiments and thereby improves the accuracy of vehicle trajectory prediction.

Brief Description of the Drawings

FIG. 1 is a schematic flowchart of the trajectory prediction method provided by the first embodiment of the present invention;

FIG. 2 is a schematic flowchart of the trajectory prediction method provided by the second embodiment of the present invention;

FIG. 3 is a schematic diagram of the neural network model of the trajectory prediction method provided by the second embodiment of the present invention;

FIG. 4 is a schematic diagram of an application of the trajectory prediction method provided by the second embodiment of the present invention;

FIG. 5 is a schematic structural diagram of the trajectory prediction device provided by the third embodiment of the present invention.

Detailed Description of the Embodiments

To make the purpose, features, and advantages of the present invention clearer and easier to understand, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

Please refer to FIG. 1, which is a schematic flowchart of the trajectory prediction method provided by the first embodiment of the present invention. The method is applied to a vehicle equipped with an on-board camera. As shown in FIG. 1, the trajectory prediction method mainly includes the following steps:

101. Photograph the surrounding environment with the on-board camera and acquire a video sequence containing the surrounding vehicles and the vehicle background.

Specifically, during automatic driving it is assumed that all traffic participants obey traffic rules, and a model is built to predict the future trajectories of other traffic participants. Building the model requires information about the surrounding environment, so the on-board camera on the vehicle first photographs the surroundings to acquire a video sequence containing the surrounding vehicles and the vehicle background. The frame rate of the video sequence can be chosen according to the actual situation. The surrounding vehicles are vehicles within a certain distance of the camera-equipped vehicle that may potentially affect it; this range may be 30 meters around the camera-equipped vehicle.

102. Locate the surrounding vehicles in the video sequence, extract their historical trajectory information, and take the scene semantic information obtained by image segmentation of the video sequence as auxiliary information.

Specifically, motion in a video sequence is the illusion produced by displaying frames in rapid succession; each frame is a still image. The surrounding vehicles are therefore located in each frame, and their trajectory information can be read from a sequence of consecutive frames. For the current frame, the information obtained from past frames is thus the historical trajectory information of the surrounding vehicles.

The scene semantic information obtained by image segmentation of each frame is used as auxiliary information. Image segmentation divides the objects in each frame according to semantic category and labels the scene semantic information, such as pedestrians, surrounding vehicles, buildings, sky, vegetation, road obstacles, lane lines, road sign information, and traffic light information, and then identifies the drivable area in the current frame. Using scene semantic information as auxiliary information provides a degree of robustness to changes in the target's appearance.

Optionally, since the regions corresponding to different semantic categories are distinct feature regions whose boundaries are edges, edge detection can be used to segment each frame and extract the desired target. An edge marks the end of one feature region and the beginning of another: within the desired target, features or attributes such as grayscale, color, or texture are consistent, and they differ from those inside other feature regions.
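The edge-based segmentation described above can be sketched with a gradient operator. The patent does not name a specific operator, so the choice of a Sobel filter and the magnitude threshold below are illustrative assumptions, not the patented method:

```python
import numpy as np

def sobel_edges(gray: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Return a binary edge map from a 2-D grayscale image.
    Pixels whose gradient magnitude exceeds `threshold` * max magnitude
    are marked as edges (feature-region boundaries)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                         # vertical gradient
    padded = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    mag = np.hypot(gx, gy)
    if mag.max() == 0:
        return np.zeros((h, w), dtype=bool)
    return mag > threshold * mag.max()

# A synthetic frame: a bright square (stand-in for a vehicle) on a dark background.
frame = np.zeros((16, 16))
frame[4:12, 4:12] = 1.0
edges = sobel_edges(frame)
```

The edge map is true along the square's boundary (where grayscale changes) and false in its uniform interior, matching the description of edges as boundaries between feature regions.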

103. Input the historical trajectory information and the auxiliary information into the neural network model to obtain the predicted trajectories of the surrounding vehicles.

Specifically, a neural network is a complex network system formed by a large number of simple, widely interconnected neurons. It is a highly complex nonlinear dynamic learning system with large-scale parallelism, distributed storage and processing, self-organization, adaptability, and self-learning capability. Therefore, a neural network is used to build a mathematical model, and the historical trajectory information and the auxiliary information are input into this neural network model to obtain the predicted trajectories of the surrounding vehicles.

In this embodiment of the present invention, a video sequence containing the surrounding vehicles and the vehicle background is acquired by the on-board camera, the video sequence is image-segmented to obtain scene semantic information, and the scene semantic information and the historical trajectory information are then input into the neural network model to obtain the predicted trajectories, instead of extracting scene semantic information from static images for analysis. This preserves the temporal continuity of the neural network model and thereby improves the accuracy of vehicle trajectory prediction.

Please refer to FIG. 2, which is a schematic flowchart of the trajectory prediction method provided by the second embodiment of the present invention. The method is applied to a vehicle equipped with an on-board camera. As shown in FIG. 2, the trajectory prediction method mainly includes the following steps:

201. Photograph the surrounding environment with the on-board camera and acquire a video sequence containing the surrounding vehicles and the vehicle background.

202. Locate the surrounding vehicles in the video sequence, extract their historical trajectory information, and take the scene semantic information obtained by image segmentation of the video sequence as auxiliary information.

203. Input the auxiliary information into the convolutional neural network to obtain spatial feature information.

Specifically, the neural network model includes a convolutional neural network, a first-layer long short-term memory network, a second-layer long short-term memory network, and a fully connected layer.

A convolutional neural network is a type of feedforward neural network. After the video sequence has been segmented and labeled, the resulting scene semantic information is input to the convolutional neural network as auxiliary information to obtain spatial feature information. The auxiliary information is image information and can be one-hot encoded, with the number of channels equal to the number of semantic categories. This auxiliary information is input into a four-layer convolutional neural network, whose convolution kernel may be 3*3*4, yielding spatial feature information represented as a 6-dimensional vector.
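The one-hot encoding step above can be sketched in numpy. The class count and label IDs below are illustrative assumptions; the patent specifies only that the channel count equals the number of semantic categories:

```python
import numpy as np

# Hypothetical class count: the patent lists categories such as pedestrians,
# vehicles, buildings, sky, road obstacles, lane lines, road signs, and
# traffic lights, but assigns no numeric indices.
NUM_CLASSES = 8

def one_hot_semantic_map(label_map: np.ndarray, num_classes: int = NUM_CLASSES) -> np.ndarray:
    """Convert an (H, W) map of integer class labels into an (H, W, C) one-hot
    tensor, where the channel count C equals the number of semantic categories."""
    h, w = label_map.shape
    encoded = np.zeros((h, w, num_classes), dtype=np.float32)
    rows, cols = np.indices((h, w))
    encoded[rows, cols, label_map] = 1.0
    return encoded

labels = np.array([[0, 2], [7, 2]])   # a tiny 2x2 segmentation map
encoded = one_hot_semantic_map(labels)
```

Each pixel carries exactly one active channel, so the encoded tensor is suitable as multi-channel input to the convolutional network described above.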

As shown in FIG. 3, the convolutional neural network includes a convolutional layer, a rectified linear unit, a pooling layer, and a dropout layer. The convolutional layer extracts features from the auxiliary information; the rectified linear unit introduces nonlinearity; the pooling layer compresses the input auxiliary information and extracts its main features; and the dropout layer helps alleviate overfitting.
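As a concrete illustration of that per-layer structure, here is a minimal numpy sketch of one convolution, ReLU, and max-pooling stage. It is single-channel with illustrative weights, not the patented network; the dropout layer is omitted since it is only active during training:

```python
import numpy as np

def conv3x3_relu_maxpool(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """One conv(3x3) -> ReLU -> 2x2 max-pool stage, mirroring the layer
    sequence described in the text. Stride 1, no padding."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)  # convolution
    out = np.maximum(out, 0.0)                                  # ReLU nonlinearity
    ph, pw = out.shape[0] // 2, out.shape[1] // 2               # 2x2 max pooling
    pooled = out[:ph * 2, :pw * 2].reshape(ph, 2, pw, 2).max(axis=(1, 3))
    return pooled

# An 8x8 constant input with a 3x3 averaging kernel: conv output is 6x6,
# pooling compresses it to 3x3, illustrating the feature-compression role.
feature = conv3x3_relu_maxpool(np.ones((8, 8)), np.full((3, 3), 1 / 9))
```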

204. Input the historical trajectory information into the first-layer long short-term memory network to obtain temporal feature information, and input the spatial feature information and the temporal feature information into the second-layer long short-term memory network to obtain joint feature information.

Specifically, a long short-term memory (LSTM) network is a recurrent network over time. Historical trajectory information is sequential, and positions are contextually related: as a sequence input, it requires continuously learning the positional features before and after each step. The LSTM network is therefore used to train on the historical trajectory information, and the trajectory information of historical frames is connected to infer the trajectory information of the current frame.

As shown in FIG. 3, the historical trajectory information is input into the first-layer LSTM network to obtain temporal feature information, and this temporal feature information together with the spatial feature information obtained in step 203 is input into the second-layer LSTM network to obtain joint feature information. Since the spatial occupancy grid has dimension 6, the first-layer LSTM network not only learns temporal feature information but also brings the temporal and spatial feature information to the same dimensionality. In practice, the first-layer LSTM network may have 100 units, and the second-layer LSTM network may consist of two stacked LSTM layers with 300 units each.
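The recurrence inside each LSTM layer can be sketched as a single-cell forward step in numpy. The gate equations below are the standard LSTM formulation, not taken from the patent; the sizes (6-dimensional input, 100 units) follow the first-layer configuration just described, and the weights and trajectory are random placeholders:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One forward step of a standard LSTM cell.
    x: (input_dim,); h_prev, c_prev: (units,); W: (4*units, input_dim);
    U: (4*units, units); b: (4*units,). Gate order: input, forget, cell, output."""
    units = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = 1 / (1 + np.exp(-z[0 * units:1 * units]))   # input gate
    f = 1 / (1 + np.exp(-z[1 * units:2 * units]))   # forget gate
    g = np.tanh(z[2 * units:3 * units])             # candidate cell state
    o = 1 / (1 + np.exp(-z[3 * units:4 * units]))   # output gate
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
input_dim, units = 6, 100          # 6-d occupancy vector in, 100 units (first layer)
W = rng.normal(0, 0.1, (4 * units, input_dim))
U = rng.normal(0, 0.1, (4 * units, units))
b = np.zeros(4 * units)
h = np.zeros(units)
c = np.zeros(units)
trajectory = rng.normal(size=(10, input_dim))  # 10 synthetic historical frames
for x_t in trajectory:                          # unroll over the history
    h, c = lstm_step(x_t, h, c, W, U, b)
```

After unrolling, `h` is the 100-dimensional temporal feature summarizing the history, which the second-layer LSTM would consume together with the CNN's spatial feature.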

205. Input the joint feature information into the fully connected layer to obtain the predicted trajectory.

Specifically, every node of the fully connected layer is connected to all nodes of the previous layer, combining all the features extracted by the earlier layers. The joint feature information is therefore input into the fully connected layer, and a series of matrix multiplications produces the output of the neural network model: the predicted trajectory J over T time steps. In practice, the time T may be 1.6 s.

The neural network model includes the following formula:

J ← M_p(h, a) : H × A.

Here J denotes the predicted trajectory; M denotes the mapping among H, A, and J; H denotes the historical trajectory information; A denotes the auxiliary information; p denotes a surrounding vehicle; h denotes the position information of vehicle p in the video sequence at frame t; a denotes the scene semantic information of vehicle p in the video sequence at frame t; j denotes the position information of vehicle p at frame t starting from frame T+1; and t indexes the frames.

As shown in FIG. 3, this embodiment proposes a segmentation-long short-term memory (SEG-LSTM) network that fuses the multiple streams of the historical frames and predicts the future trajectories of the surrounding vehicles.

The number of LSTM layers, the number of units per LSTM layer, the number of convolutional layers, and the size of the convolution kernel are all network hyperparameters, determined by cross-validation. Cross-validation selects the optimal hyperparameters while avoiding model overfitting. For example, the dataset is first divided into a training set and a test set at a ratio of 5:1. The training set is then split into 5 equal parts; each part serves in turn as the validation set while the remaining 4 parts form the training set, giving 5 rounds of training and validation. The average accuracy obtained with each candidate hyperparameter setting is compared, and the best-performing values are chosen.
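The validation scheme described above can be sketched in plain Python. The sample counts are invented for illustration; only the 5:1 split and the 5-fold rotation come from the text:

```python
from itertools import chain

def five_fold_splits(n_samples: int):
    """Yield (train_indices, val_indices) pairs for 5-fold cross-validation:
    the training pool is split into 5 equal parts and each part serves
    once as the validation set while the other 4 form the training set."""
    indices = list(range(n_samples))
    fold_size = n_samples // 5
    folds = [indices[k * fold_size:(k + 1) * fold_size] for k in range(5)]
    for k in range(5):
        val = folds[k]
        train = list(chain.from_iterable(folds[j] for j in range(5) if j != k))
        yield train, val

# 600 samples at the 5:1 ratio: 500 in the training pool, 100 held out as test.
train_pool = list(range(500))
test_set = list(range(500, 600))
splits = list(five_fold_splits(len(train_pool)))
```

Each hyperparameter candidate would be trained and validated once per split, and the five validation accuracies averaged to pick the best setting.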

As shown in FIG. 4, the video sequence is divided by frame into video sequences of multiple time steps. Detection and tracking on each frame yield position information, and image segmentation yields semantic information. The position information and semantic information of the same frame are then input into the LSTM network for training; training over multiple historical frames and the current frame produces the predicted trajectory.

206. Acquire, through the depth camera, the minimum relative distance between the vehicle and each surrounding vehicle, and convert the two-dimensional predicted trajectory into a three-dimensional predicted trajectory according to the minimum relative distance.

Specifically, the predicted trajectory is a two-dimensional predicted trajectory, and the vehicle is further equipped with a depth camera.

The two-dimensional predicted trajectory is converted into a three-dimensional predicted trajectory according to the minimum relative distance by the following formula:

Here x, y, w, and h denote the elements of the pixel bounding box of the two-dimensional predicted trajectory in each video frame; x_r, y_r, w_r, and h_r denote the elements of the bounding box of the three-dimensional predicted trajectory in each video frame; f denotes the focal length of the depth camera; and d_min denotes the minimum relative distance between the vehicle and each surrounding vehicle.
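The conversion formula itself did not survive in this text. As a hedged sketch only: the standard pinhole-camera relation (metric coordinate = pixel coordinate × depth / focal length) is consistent with the variables defined above, but it is an assumption, not the patent's verified formula:

```python
def bbox_to_metric(x, y, w, h, f, d_min):
    """Hedged sketch: map a pixel bounding box (x, y, w, h) to metric
    coordinates (x_r, y_r, w_r, h_r) at depth d_min under the pinhole
    model X = x * Z / f. This is the textbook relation consistent with
    the variables the text defines, not the patented formula."""
    scale = d_min / f
    return x * scale, y * scale, w * scale, h * scale

# Example: a 100x50-pixel box, focal length 1000 px, nearest distance 20 m.
xr, yr, wr, hr = bbox_to_metric(320, 240, 100, 50, f=1000.0, d_min=20.0)
```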

If the subscript p is omitted, the historical trajectory information and the predicted trajectory can be defined as a three-dimensional space occupancy grid, namely

H, J ∈ R^6 = {x, y, w, h, d_min, d_max}

where d_max denotes the maximum distance between the vehicle and each surrounding vehicle.

In this embodiment of the present invention, a video sequence containing the surrounding vehicles and the vehicle background is first acquired by the on-board camera, the video sequence is image-segmented to obtain scene semantic information, and the scene semantic information and the historical trajectory information are then input into the neural network model to obtain the predicted trajectories, instead of extracting scene semantic information from static images for analysis. This preserves the temporal continuity of the neural network model and thereby improves the accuracy of vehicle trajectory prediction. In addition, combining the convolutional neural network with the LSTM network improves the robustness of tracking the surrounding vehicles, and obtaining scene semantic information through image segmentation improves the interpretability of the training process.

Please refer to FIG. 5, which is a schematic structural diagram of the trajectory prediction device provided by the third embodiment of the present invention, applied to a vehicle equipped with an on-board camera. As shown in FIG. 5, the trajectory prediction device mainly includes:

An acquisition module 301, configured to photograph the surrounding environment with the on-board camera and acquire a video sequence containing the surrounding vehicles and the vehicle background.

An extraction and segmentation module 302, configured to locate the surrounding vehicles in the video sequence, extract their historical trajectory information, and take the scene semantic information obtained by image segmentation of the video sequence as auxiliary information.

An output module 303, configured to input the historical trajectory information and the auxiliary information into the neural network model to obtain the predicted trajectories of the surrounding vehicles.

Further, the neural network model includes a convolutional neural network, a first-layer long short-term memory network, a second-layer long short-term memory network, and a fully connected layer. Accordingly:

The output module 303 is further configured to input the auxiliary information into the convolutional neural network to obtain spatial feature information.

The output module 303 is further configured to input the historical trajectory information into the first-layer long short-term memory network to obtain temporal feature information.

The output module 303 is further configured to input the spatial feature information and the temporal feature information into the second-layer long short-term memory network to obtain joint feature information.

The output module 303 is further configured to input the joint feature information into the fully connected layer to obtain the predicted trajectory.

Further, the neural network model includes the following formula:

J ← M_p(h, a) : H × A.

Here J denotes the predicted trajectory; M denotes the mapping among H, A, and J; H denotes the historical trajectory information; A denotes the auxiliary information; p denotes a surrounding vehicle; h denotes the position information of vehicle p in the video sequence at frame t; a denotes the scene semantic information of vehicle p in the video sequence at frame t; j denotes the position information of vehicle p at frame t starting from frame T+1; and t indexes the frames.

Further, the predicted trajectory is a two-dimensional predicted trajectory, and the vehicle is further provided with a depth camera.

The acquisition module 301 is further configured to acquire, through the depth camera, the minimum relative distance between the vehicle and each of the surrounding vehicles.

The apparatus then further includes a conversion module 304.

The conversion module 304 is configured to convert the two-dimensional predicted trajectory into a three-dimensional predicted trajectory according to the minimum relative distance.

Further, the conversion module 304 is also configured to convert the two-dimensional predicted trajectory into the three-dimensional predicted trajectory according to the minimum relative distance by the following formula:

Here, x, y, w, and h denote the elements of the pixel bounding box of the two-dimensional predicted trajectory in each frame of the video sequence, x_r, y_r, w_r, and h_r denote the elements of the pixel bounding box of the three-dimensional predicted trajectory in each frame of the video sequence, f denotes the focal length of the depth camera, and d_min denotes the minimum relative distance between the vehicle and the corresponding surrounding vehicle.
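The conversion formula itself was rendered as an image in the original filing and does not survive in this text. Under the standard pinhole-camera model, one plausible form scales each pixel bounding-box element by d_min/f; the sketch below encodes that reading and is an assumption, not the patent's actual equation. The focal length and distance values are hypothetical:

```python
def bbox_2d_to_3d(x, y, w, h, f, d_min):
    """Back-project pixel bounding-box elements to metric units at the
    measured minimum depth d_min (pinhole model; assumed form)."""
    scale = d_min / f  # metres per pixel at depth d_min
    return x * scale, y * scale, w * scale, h * scale

# Hypothetical numbers: a 700-pixel focal length, a vehicle 14 m away.
xr, yr, wr, hr = bbox_2d_to_3d(320.0, 240.0, 80.0, 60.0, f=700.0, d_min=14.0)
```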

For the specific processes by which the above modules implement their respective functions, reference may be made to the relevant content of the embodiments shown in FIG. 1 to FIG. 4, which is not repeated here.

In the embodiment of the present invention, a video sequence including the surrounding vehicles and the vehicle background is acquired through the vehicle-mounted camera, the video sequence is image-segmented to obtain scene semantic information, and the scene semantic information and the historical trajectory information are then input into the neural network model to obtain the predicted trajectory, instead of extracting scene semantic information from static images for analysis. This preserves the temporal continuity of the neural network model in this embodiment and thereby improves the accuracy of vehicle trajectory prediction.

In the embodiments provided in this application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the embodiments described above are merely illustrative: the division into modules is only a division by logical function, and other divisions are possible in actual implementation; for instance, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the couplings, direct couplings, or communication links shown or discussed between components may be indirect couplings or communication links through interfaces or modules, and may be electrical, mechanical, or of other forms.

The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed across multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional modules in the embodiments of the present invention may be integrated into one processing module, each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.

It should be noted that, for brevity of description, each of the foregoing method embodiments is expressed as a series of combined actions; however, those skilled in the art should understand that the present invention is not limited by the described order of actions, since according to the present invention some steps may be performed in other orders or simultaneously. Those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.

In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in a given embodiment, reference may be made to the relevant descriptions of other embodiments.

The above describes the trajectory prediction method and apparatus, terminal, and computer-readable storage medium provided by the present invention. Those of ordinary skill in the art may, based on the ideas of the embodiments of the present invention, make changes to the specific implementations and the scope of application; in summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A trajectory prediction method, applied to a vehicle provided with a vehicle-mounted camera, wherein the method comprises:
photographing the surrounding environment with the vehicle-mounted camera to acquire a video sequence including surrounding vehicles and the vehicle background;
locating the surrounding vehicles in the video sequence, extracting historical trajectory information of the surrounding vehicles, and using scene semantic information obtained by image segmentation of the video sequence as auxiliary information; and
inputting the historical trajectory information and the auxiliary information into a neural network model to obtain predicted trajectories of the surrounding vehicles.

2. The trajectory prediction method according to claim 1, wherein the neural network model comprises a convolutional neural network, a first-layer long short-term memory network, a second-layer long short-term memory network, and a fully connected layer, and inputting the historical trajectory information and the auxiliary information into the neural network model to obtain the predicted trajectories of the surrounding vehicles comprises:
inputting the auxiliary information into the convolutional neural network to obtain spatial feature information;
inputting the historical trajectory information into the first-layer long short-term memory network to obtain temporal feature information;
inputting the spatial feature information and the temporal feature information into the second-layer long short-term memory network to obtain joint feature information; and
inputting the joint feature information into the fully connected layer to obtain the predicted trajectory.

3. The trajectory prediction method according to claim 1, wherein the neural network model comprises the following formula:
J ← M_p(h, a): H × A;
where J denotes the predicted trajectory, M denotes the mapping between H, A, and J, H denotes the historical trajectory information, A denotes the auxiliary information, p denotes a surrounding vehicle, h denotes the position information of vehicle p in the t-th frame of the video sequence, a denotes the scene semantic information of vehicle p in the t-th frame of the video sequence, j denotes the position information of vehicle p in the t-th frame of the video sequence starting from frame T+1, and t denotes the frame index.

4. The trajectory prediction method according to claim 1, wherein the predicted trajectory is a two-dimensional predicted trajectory and the vehicle is further provided with a depth camera, and the method further comprises:
acquiring, through the depth camera, the minimum relative distance between the vehicle and each of the surrounding vehicles; and
converting the two-dimensional predicted trajectory into a three-dimensional predicted trajectory according to the minimum relative distance.

5. The trajectory prediction method according to claim 4, wherein the two-dimensional predicted trajectory is converted into the three-dimensional predicted trajectory according to the minimum relative distance by the following formula:
where x, y, w, and h denote the elements of the pixel bounding box of the two-dimensional predicted trajectory in each frame of the video sequence, x_r, y_r, w_r, and h_r denote the elements of the pixel bounding box of the three-dimensional predicted trajectory in each frame of the video sequence, f denotes the focal length of the depth camera, and d_min denotes the minimum relative distance between the vehicle and each of the surrounding vehicles.

6. A trajectory prediction apparatus, applied to a vehicle provided with a vehicle-mounted camera, wherein the apparatus comprises:
an acquisition module, configured to photograph the surrounding environment with the vehicle-mounted camera to acquire a video sequence including surrounding vehicles and the vehicle background;
an extraction and segmentation module, configured to locate the surrounding vehicles in the video sequence, extract historical trajectory information of the surrounding vehicles, and use scene semantic information obtained by image segmentation of the video sequence as auxiliary information; and
an output module, configured to input the historical trajectory information and the auxiliary information into a neural network model to obtain predicted trajectories of the surrounding vehicles.

7. The trajectory prediction apparatus according to claim 6, wherein the neural network model comprises a convolutional neural network, a first-layer long short-term memory network, a second-layer long short-term memory network, and a fully connected layer, and:
the output module is further configured to input the auxiliary information into the convolutional neural network to obtain spatial feature information;
the output module is further configured to input the historical trajectory information into the first-layer long short-term memory network to obtain temporal feature information;
the output module is further configured to input the spatial feature information and the temporal feature information into the second-layer long short-term memory network to obtain joint feature information; and
the output module is further configured to input the joint feature information into the fully connected layer to obtain the predicted trajectory.

8. The trajectory prediction apparatus according to claim 6, wherein the neural network model comprises the following formula:
J ← M_p(h, a): H × A;
where J denotes the predicted trajectory, M denotes the mapping between H, A, and J, H denotes the historical trajectory information, A denotes the auxiliary information, p denotes a surrounding vehicle, h denotes the position information of vehicle p in the t-th frame of the video sequence, a denotes the scene semantic information of vehicle p in the t-th frame of the video sequence, j denotes the position information of vehicle p in the t-th frame of the video sequence starting from frame T+1, and t denotes the frame index.

9. The trajectory prediction apparatus according to claim 6, wherein the predicted trajectory is a two-dimensional predicted trajectory and the vehicle is further provided with a depth camera,
the acquisition module is further configured to acquire, through the depth camera, the minimum relative distance between the vehicle and each of the surrounding vehicles, and
the apparatus further comprises a conversion module, configured to convert the two-dimensional predicted trajectory into a three-dimensional predicted trajectory according to the minimum relative distance.

10. The trajectory prediction apparatus according to claim 9, wherein the conversion module is further configured to convert the two-dimensional predicted trajectory into the three-dimensional predicted trajectory according to the minimum relative distance by the following formula:
where x, y, w, and h denote the elements of the pixel bounding box of the two-dimensional predicted trajectory in each frame of the video sequence, x_r, y_r, w_r, and h_r denote the elements of the pixel bounding box of the three-dimensional predicted trajectory in each frame of the video sequence, f denotes the focal length of the depth camera, and d_min denotes the minimum relative distance between the vehicle and each of the surrounding vehicles.
CN201810752554.5A 2018-07-10 2018-07-10 Trajectory prediction method and apparatus Active CN108803617B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810752554.5A CN108803617B (en) 2018-07-10 2018-07-10 Trajectory prediction method and apparatus


Publications (2)

Publication Number Publication Date
CN108803617A true CN108803617A (en) 2018-11-13
CN108803617B CN108803617B (en) 2020-03-20

Family

ID=64075916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810752554.5A Active CN108803617B (en) 2018-07-10 2018-07-10 Trajectory prediction method and apparatus

Country Status (1)

Country Link
CN (1) CN108803617B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105700538A (en) * 2016-01-28 2016-06-22 武汉光庭信息技术股份有限公司 A track following method based on a neural network and a PID algorithm
CN106873580A (en) * 2015-11-05 2017-06-20 福特全球技术公司 Based on perception data autonomous driving at the intersection
CN106952303A (en) * 2017-03-09 2017-07-14 北京旷视科技有限公司 Vehicle distance detection method, device and system
CN107144285A (en) * 2017-05-08 2017-09-08 深圳地平线机器人科技有限公司 Posture information determines method, device and movable equipment
US20180023960A1 (en) * 2016-07-21 2018-01-25 Mobileye Vision Technologies Ltd. Distributing a crowdsourced sparse map for autonomous vehicle navigation


Cited By (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020010517A1 (en) * 2018-07-10 2020-01-16 深圳大学 Trajectory prediction method and apparatus
JP7610916B2 (en) 2018-12-14 2025-01-09 株式会社小松製作所 Transport vehicle management system and transport vehicle management method
WO2020122212A1 (en) * 2018-12-14 2020-06-18 株式会社小松製作所 Transport vehicle management system and transport vehicle management method
JP2020095612A (en) * 2018-12-14 2020-06-18 株式会社小松製作所 Transport vehicle management system and transport vehicle management method
CN109631915A (en) * 2018-12-19 2019-04-16 百度在线网络技术(北京)有限公司 Trajectory predictions method, apparatus, equipment and computer readable storage medium
CN109523574A (en) * 2018-12-27 2019-03-26 联想(北京)有限公司 A kind of run trace prediction technique and electronic equipment
CN113424209B (en) * 2019-02-15 2023-12-22 宝马股份公司 Trajectory prediction using deep learning multi-predictor fusion and Bayesian optimization
CN113424209A (en) * 2019-02-15 2021-09-21 宝马股份公司 Trajectory prediction using deep learning multi-predictor fusion and bayesian optimization
WO2020164089A1 (en) * 2019-02-15 2020-08-20 Bayerische Motoren Werke Aktiengesellschaft Trajectory prediction using deep learning multiple predictor fusion and bayesian optimization
CN109583151A (en) * 2019-02-20 2019-04-05 百度在线网络技术(北京)有限公司 The driving trace prediction technique and device of vehicle
CN111738037B (en) * 2019-03-25 2024-03-08 广州汽车集团股份有限公司 An automatic driving method, system and vehicle thereof
CN111738037A (en) * 2019-03-25 2020-10-02 广州汽车集团股份有限公司 An automatic driving method, system and vehicle thereof
CN109885066A (en) * 2019-03-26 2019-06-14 北京经纬恒润科技有限公司 A kind of motion profile prediction technique and device
CN111316286B (en) * 2019-03-27 2024-09-10 深圳市卓驭科技有限公司 Trajectory prediction method and device, storage medium, driving system and vehicle
CN111316286A (en) * 2019-03-27 2020-06-19 深圳市大疆创新科技有限公司 Trajectory prediction method and device, storage medium, driving system and vehicle
CN110007675A (en) * 2019-04-12 2019-07-12 北京航空航天大学 A vehicle automatic driving decision-making system based on driving situation map and the preparation method of training set based on UAV
CN110223318A (en) * 2019-04-28 2019-09-10 驭势科技(北京)有限公司 A kind of prediction technique of multi-target track, device, mobile unit and storage medium
CN110262486A (en) * 2019-06-11 2019-09-20 北京三快在线科技有限公司 A kind of unmanned equipment moving control method and device
CN112078592A (en) * 2019-06-13 2020-12-15 初速度(苏州)科技有限公司 Method and device for predicting vehicle behavior and/or vehicle track
CN112078592B (en) * 2019-06-13 2021-12-24 魔门塔(苏州)科技有限公司 Method and device for predicting vehicle behavior and/or vehicle track
CN110275531A (en) * 2019-06-21 2019-09-24 北京三快在线科技有限公司 The trajectory predictions method, apparatus and unmanned equipment of barrier
CN110852342B (en) * 2019-09-26 2020-11-24 京东城市(北京)数字科技有限公司 Road network data acquisition method, device, equipment and computer storage medium
CN110852342A (en) * 2019-09-26 2020-02-28 京东城市(北京)数字科技有限公司 Road network data acquisition method, device, equipment and computer storage medium
CN110834645B (en) * 2019-10-30 2021-06-29 中国第一汽车股份有限公司 Free space determination method and device for vehicle, storage medium and vehicle
CN110834645A (en) * 2019-10-30 2020-02-25 中国第一汽车股份有限公司 Free space determination method and device for vehicle, storage medium and vehicle
US11351996B2 (en) 2019-11-01 2022-06-07 Denso International America, Inc. Trajectory prediction of surrounding vehicles using predefined routes
CN112784628B (en) * 2019-11-06 2024-03-19 北京地平线机器人技术研发有限公司 Track prediction method, neural network training method and device for track prediction
CN112784628A (en) * 2019-11-06 2021-05-11 北京地平线机器人技术研发有限公司 Trajectory prediction method, and neural network training method and device for trajectory prediction
US11650072B2 (en) 2019-11-26 2023-05-16 International Business Machines Corporation Portable lane departure detection
CN111114554A (en) * 2019-12-16 2020-05-08 苏州智加科技有限公司 Driving trajectory prediction method, device, terminal and storage medium
CN111114554B (en) * 2019-12-16 2021-06-11 苏州智加科技有限公司 Method, device, terminal and storage medium for predicting travel track
WO2021134354A1 (en) * 2019-12-30 2021-07-08 深圳元戎启行科技有限公司 Path prediction method and apparatus, computer device, and storage medium
CN113811830B (en) * 2019-12-30 2022-05-10 深圳元戎启行科技有限公司 Trajectory prediction method, device, computer equipment and storage medium
CN113811830A (en) * 2019-12-30 2021-12-17 深圳元戎启行科技有限公司 Trajectory prediction method, device, computer equipment and storage medium
CN111260122A (en) * 2020-01-13 2020-06-09 重庆首讯科技股份有限公司 Method and device for predicting traffic flow on expressway
CN111114543A (en) * 2020-03-26 2020-05-08 北京三快在线科技有限公司 Trajectory prediction method and device
CN113496268A (en) * 2020-04-08 2021-10-12 北京图森智途科技有限公司 Trajectory prediction method and device
JP2022549952A (en) * 2020-04-10 2022-11-29 センスタイム グループ リミテッド Trajectory prediction method, device, equipment and storage media resource
JP7338052B2 (en) 2020-04-10 2023-09-04 センスタイム グループ リミテッド Trajectory prediction method, device, equipment and storage media resource
WO2021204092A1 (en) * 2020-04-10 2021-10-14 商汤集团有限公司 Track prediction method and apparatus, and device and storage medium
US12311970B2 (en) 2020-04-10 2025-05-27 Sensetime Group Limited Method and apparatus for trajectory prediction, device and storage medium
CN111595352B (en) * 2020-05-14 2021-09-28 陕西重型汽车有限公司 Track prediction method based on environment perception and vehicle driving intention
CN111595352A (en) * 2020-05-14 2020-08-28 陕西重型汽车有限公司 Track prediction method based on environment perception and vehicle driving intention
WO2022033650A1 (en) 2020-08-10 2022-02-17 Dr. Ing. H.C. F. Porsche Aktiengesellschaft Device for and method of predicting a trajectory for a vehicle
CN112562331A (en) * 2020-11-30 2021-03-26 的卢技术有限公司 Vision perception-based other-party vehicle track prediction method
CN112558608B (en) * 2020-12-11 2023-03-17 重庆邮电大学 Vehicle-mounted machine cooperative control and path optimization method based on unmanned aerial vehicle assistance
CN112558608A (en) * 2020-12-11 2021-03-26 重庆邮电大学 Vehicle-mounted machine cooperative control and path optimization method based on unmanned aerial vehicle assistance
CN117280390A (en) * 2021-04-28 2023-12-22 宝马汽车股份有限公司 Method and apparatus for predicting object data of a subject
CN113554060A (en) * 2021-06-24 2021-10-26 福建师范大学 A Trajectory Prediction Method of LSTM Neural Network Fusion with DTW
CN113554060B (en) * 2021-06-24 2023-06-20 福建师范大学 A Trajectory Prediction Method of LSTM Neural Network Based on DTW
CN114387782A (en) * 2022-01-12 2022-04-22 智道网联科技(北京)有限公司 Method and device for predicting traffic state and electronic equipment
CN114460943A (en) * 2022-02-10 2022-05-10 山东大学 Self-adaptive target navigation method and system for service robot
CN114460943B (en) * 2022-02-10 2023-07-28 山东大学 Self-adaptive target navigation method and system for service robot
CN115881286B (en) * 2023-02-21 2023-06-16 创意信息技术股份有限公司 Epidemic prevention management scheduling system
CN115881286A (en) * 2023-02-21 2023-03-31 创意信息技术股份有限公司 Epidemic prevention management scheduling system
CN117764815A (en) * 2023-12-25 2024-03-26 上海人工智能创新中心 Model training method, video prediction method, device, equipment and storage medium
CN118397588A (en) * 2024-06-27 2024-07-26 深圳觉明人工智能有限公司 Camera scene analysis method, system, equipment and medium for intelligent driving automobile

Also Published As

Publication number Publication date
CN108803617B (en) 2020-03-20

Similar Documents

Publication Publication Date Title
CN108803617A (en) Trajectory predictions method and device
JP7281015B2 (en) Parametric top view representation of complex road scenes
CN108665496B (en) An end-to-end semantic instant localization and mapping method based on deep learning
US11940803B2 (en) Method, apparatus and computer storage medium for training trajectory planning model
CN111292366B (en) Visual driving ranging algorithm based on deep learning and edge calculation
WO2021218786A1 (en) Data processing system, object detection method and apparatus thereof
Kim et al. Vision-based real-time obstacle segmentation algorithm for autonomous surface vehicle
CN110555420B (en) Fusion model network and method based on pedestrian regional feature extraction and re-identification
CN115588175A (en) Aerial view characteristic generation method based on vehicle-mounted all-around image
US12340520B2 (en) System and method for motion prediction in autonomous driving
CN107633220A (en) A kind of vehicle front target identification method based on convolutional neural networks
CN113077505A (en) Optimization method of monocular depth estimation network based on contrast learning
WO2022000469A1 (en) Method and apparatus for 3d object detection and segmentation based on stereo vision
CN114973199A (en) Rail transit train obstacle detection method based on convolutional neural network
CN107274445A (en) A kind of image depth estimation method and system
JP2019149142A (en) Object marking system and object marking method
CN115115713A (en) Unified space-time fusion all-around aerial view perception method
CN111967373A (en) Self-adaptive enhanced fusion real-time instance segmentation method based on camera and laser radar
CN111354030A (en) Method for generating unsupervised monocular image depth map embedded into SENET unit
CN115249269A (en) Object detection method, computer program product, storage medium, and electronic device
CN116625383A (en) A road vehicle perception method based on multi-sensor fusion
Dwivedi et al. Bird's Eye View Segmentation Using Lifted 2D Semantic Features.
Feng et al. Polarpoint-bev: Bird-eye-view perception in polar points for explainable end-to-end autonomous driving
CN112233079A (en) Method and system for multi-sensor image fusion
Gong et al. SkipcrossNets: Adaptive Skip-Cross Fusion for Road Detection: Y. Gong et al.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant