
CN110009903B - Traffic accident scene restoration method - Google Patents


Info

Publication number
CN110009903B
CN110009903B (application CN201910162408.1A)
Authority
CN
China
Prior art keywords
vehicle
accident
data
wheel
traffic accident
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910162408.1A
Other languages
Chinese (zh)
Other versions
CN110009903A (en)
Inventor
王平
刘富强
陈新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Beijing Automotive Research Institute Co Ltd
Original Assignee
Tongji University
Beijing Automotive Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University, Beijing Automotive Research Institute Co Ltd filed Critical Tongji University
Priority to CN201910162408.1A priority Critical patent/CN110009903B/en
Publication of CN110009903A publication Critical patent/CN110009903A/en
Application granted granted Critical
Publication of CN110009903B publication Critical patent/CN110009903B/en

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06T11/23
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125Traffic data processing
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/025Services making use of location information using location based information parameters
    • H04W4/027Services making use of location information using location based information parameters using movement velocity, acceleration information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/029Location-based management or tracking services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30Services specially adapted for particular environments, situations or purposes
    • H04W4/40Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract



The present invention relates to a method for restoring a traffic accident scene, comprising the following steps: S1, drawing the driving track of the vehicle during the minute before the accident using data collected by a GPS device and sensors arranged on the vehicle; S2, taking the drawn pre-accident track and the data in the vehicle recorder at the moment of the accident as input, and obtaining the restored driving track by means of a deep learning algorithm. Compared with the prior art, the invention uses data collected by on-board devices together with a repeatedly trained deep learning algorithm, which effectively removes random errors and the interference of subjective human factors, greatly reduces the accident misjudgment rate, avoids unnecessary waste of time and manpower, and strongly promotes the maintenance of good social order.


Description

Traffic accident scene restoration method
Technical Field
The invention relates to a traffic accident scene simulation and reproduction technology, in particular to a traffic accident scene restoration method.
Background
The main task of traffic accident simulation and reproduction is to estimate, from the driving marks left on the road, the collision marks on the vehicle body, and the stopping positions of the vehicles at the accident site, quantities such as the running speed, acceleration, course angle, and post-collision trajectory of the vehicles during the collision, so as to infer the cause of the accident, determine responsibility, and impose the corresponding penalties. This process is a vehicle travel track reproduction technique.
Against the background of today's rapidly developing transportation, traffic accidents occur frequently. When the two parties to an accident cannot agree on who bears the main responsibility, even a minor accident often falls into a cumbersome process in which both parties must wait for the traffic police to arrive at the scene. To obtain the specifics of the site, the police must measure and assess the damage to the vehicles, their stopping positions, and so on, in order to simulate and reproduce the accident. This usually costs both parties a great deal of time, prevents following vehicles from passing and causes congestion, and wastes police resources. Moreover, manual measurement introduces subjective bias and objective errors, so the reproduced result often deviates considerably from the actual situation. Existing research on trajectory reproduction in traffic accidents either reconstructs the scene from video captured by cameras installed on vehicles, or models the vehicle using only its stopping position. These methods ignore the dynamic response of the vehicle in operation and are therefore often unsatisfactory.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a traffic accident scene restoration method.
The purpose of the invention can be realized by the following technical scheme:
a traffic accident scene restoration method comprises the following steps:
S1, drawing the driving track of the vehicle during the minute before the accident using the data collected by the GPS device and the sensors arranged on the vehicle;
and S2, taking the drawn driving track of the vehicle one minute before the accident and the data in the vehicle recorder at the moment of the accident as input, and obtaining the restored driving track of the vehicle by applying a deep learning algorithm.
Preferably, in step S1, the traveling direction of the vehicle is determined from the correspondence between the difference in the readings of tension sensors mounted on the two sides of a wheel and the vehicle's turning direction and angle, and emergency braking is detected by pressure sensors mounted on the wheel.
Preferably, the data acquired by the sensors is first filtered by a Kalman filtering algorithm before any further processing.
Preferably, the deep learning algorithm is specifically a neural network algorithm.
Preferably, the neural network algorithm specifically adopts LSTM to build a recurrent neural network.
Preferably, the data collected by the GPS device and the sensor includes: position of vehicle, vehicle speed, acceleration, course angle, tire pressure, tire surface tension, yaw rate, angular acceleration.
Preferably, the restored vehicle driving track is obtained by simulation based on OpenCV.
Preferably, the restored vehicle driving track is sent to a traffic police or a mobile terminal of an owner of the vehicle.
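As a rough illustration of the wheel-tension rule in the preferred steps above, the mapping from tension difference to travel direction might be sketched as follows. The `gain` and `deadband` constants and the sign convention (more tension on the right side meaning a left turn) are illustrative assumptions, not values given in the patent:

```python
def infer_turn(left_tension, right_tension, gain=0.05, deadband=2.0):
    """Infer the travel direction and a rough turn angle (degrees) from the
    difference between the tension sensors on the two sides of a wheel.

    gain (degrees per newton of tension difference) and deadband are
    hypothetical calibration constants; real values would have to come
    from vehicle calibration tests.
    """
    diff = right_tension - left_tension
    if abs(diff) < deadband:
        return "straight", 0.0
    # Assumed sign convention: higher tension on the right side
    # corresponds to a left turn.
    direction = "left" if diff > 0 else "right"
    return direction, abs(diff) * gain
```

In practice a per-tire calibration table would likely replace the single linear `gain`.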
Compared with the prior art, the invention has the following advantages:
1. Using data acquired by the on-board device and a repeatedly trained deep learning algorithm, random errors and the interference of subjective human factors can be effectively removed, greatly reducing the accident misjudgment rate. Paired with the mobile phone terminal, a simulated reproduction of the vehicle tracks at the accident scene is generated directly, so the two parties need not wait on site for the traffic police; congestion caused by the accident is greatly reduced, unnecessary time and manpower are saved, and the maintenance of good social order is promoted.
2. High accuracy: by design, the method avoids subjective assumptions in determining accident responsibility and eliminates random errors, helping to handle traffic accidents fairly and equitably.
3. High adaptability: the method adopts a neural network model trained on a large data stream, so its adaptability is greatly improved; after sufficient training it can handle route prediction under any condition. Each successful prediction can also be added to the data set, so the network is continuously updated and its accuracy is maintained without manual effort.
4. High real-time performance: the wireless communication technology used has a small transmission delay, ensuring that the acquired motion-state and position information of the vehicle is always up to date; the algorithm has low complexity and small time consumption, so a conclusion can be drawn quickly.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a simulation diagram of a Kalman filter model with Gaussian random noise as the noise established in the embodiment;
fig. 3 is a registration flowchart of a mobile APP in an embodiment;
FIG. 4 is a flowchart illustrating the operation of the owner of the vehicle on the APP after the accident in the embodiment;
FIG. 5 is a flow chart of the operation of the traffic police on the APP after an accident in an embodiment;
FIG. 6 shows the Kalman-filtered angular-acceleration data of a vehicle in the embodiment.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
Examples
As shown in fig. 1, the present application provides a traffic accident scene restoration method: an on-board device records the driving track of the vehicle in real time; after a crash occurs, the track of the vehicle during the minute before the crash is extracted automatically, the original driving track is generated on that basis, and a final image is output. The method comprises the following steps:
s1, installing a pressure sensor and a tension sensor on a vehicle wheel, starting a vehicle-mounted GPS device, and drawing a driving track of the vehicle one minute before an accident occurs by using data collected by the GPS device and the sensor provided on the vehicle, where the data collected by the GPS device and the sensor is transmitted to a microprocessor unit in the vehicle in this embodiment, and the method specifically includes:
acquiring data such as the longitude and latitude, speed, and course angle of the vehicle through the GPS device, and mapping the vehicle coordinates into a two-dimensional rectangular coordinate system via a coordinate transformation;
judging the vehicle traveling direction: when the vehicle turns, the centripetal force makes the tension on the left and right surfaces of the wheel differ; based on this principle, tension sensors can be mounted on the two sides of the wheel, and the traveling direction is then determined from the correspondence between the tension difference and the vehicle's turning direction and angle;
judging whether the vehicle braked suddenly: under normal circumstances, emergency braking increases the pressure between the front tires and the ground, which can be read from the pressure sensors mounted on the wheels; under this pressure the tire deforms slightly, so the trace it leaves on the ground is wider than in normal driving. Based on this principle, the change in the wheel traces can be simulated in a computer from the vehicle's deceleration during braking;
drawing an image: and drawing the running track of the vehicle one minute before the accident through the values measured by the GPS device, the tension sensor and the pressure sensor.
S2, taking the drawn track of the minute before the accident, together with data such as the speed and acceleration recorded in the vehicle recorder at the moment of the accident, as input, and obtaining the restored vehicle track with the deep learning algorithm.
Because the data collected by the sensors is accompanied by considerable noise, it must first be filtered with a Kalman filtering algorithm.
The Kalman filter estimates the process state through feedback control: it predicts the state of the process at a given moment and then obtains feedback in the form of a (noisy) measurement. The resulting estimation algorithm is a predictor-corrector scheme with a numerical solution.
The Kalman filter estimates the state variable x ∈ R^n of a discrete-time process. The state equation of the discrete linear time-invariant system is:

x_k = A·x_{k−1} + B·u_{k−1} + w_{k−1}  (1)

Defining the observed variable z ∈ R^m gives the measurement equation:

z_k = H·x_k + v_k  (2)

where w_k and v_k represent the process excitation noise and the observation noise, respectively, assumed to be mutually independent, normally distributed white noises. The Kalman filter predicts the next state with equation (1) and then corrects the estimate using z_k:

x̂_k^− = A·x̂_{k−1} + B·u_{k−1}  (3)

x̂_k = x̂_k^− + K_k (z_k − H·x̂_k^−),  with gain K_k = P_k^− Hᵀ (H P_k^− Hᵀ + R)^{−1}  (4)

where the superscript − denotes an a-priori quantity and the hat ^ denotes an estimate: x̂_k^− is the a-priori state estimate of step k given the state of step k−1, and x̂_k is the a-posteriori state estimate of step k given the measurement z_k; P_k^− is the a-priori estimation-error covariance, P_k is the a-posteriori covariance, and R is the variance of the observation noise.
A Kalman filter model with Gaussian random noise was built and simulated in Matlab; the effect is shown in figure 2. It can be seen that raw sensor measurements cannot be used directly in practical applications, whereas the Kalman-filtered data deviates little from the theoretical prediction, meets the precision required for the calculation, and shows a good filtering effect.
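As a minimal, illustrative analogue of that simulation (not the patent's actual multivariate filter), a scalar Kalman filter implementing the predict-then-correct loop might look like this; the values of A, H, Q, and R are assumptions for the sketch:

```python
import random

def kalman_1d(measurements, A=1.0, H=1.0, Q=1e-4, R=0.5):
    """Minimal scalar Kalman filter: predict with the state model,
    then correct with each noisy measurement z_k. Q and R are
    illustrative process/observation noise variances."""
    x_hat, P = measurements[0], 1.0
    filtered = []
    for z in measurements:
        # predict: a-priori estimate and error covariance
        x_prior = A * x_hat
        P_prior = A * P * A + Q
        # correct: Kalman gain and a-posteriori update
        K = P_prior * H / (H * P_prior * H + R)
        x_hat = x_prior + K * (z - H * x_prior)
        P = (1 - K * H) * P_prior
        filtered.append(x_hat)
    return filtered

# Noisy constant signal: the filtered estimate converges toward 5.0
random.seed(0)
noisy = [5.0 + random.gauss(0, 0.3) for _ in range(200)]
smooth = kalman_1d(noisy)
```

With small Q relative to R the filter trusts the model more than any single measurement, so random sensor noise is strongly averaged out, matching the behaviour reported for Fig. 2.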
The deep learning algorithm adopted by the method is a neural network. In the practical application scenario, the controlled object is a running vehicle: the parameters (coordinates) of the trajectory the vehicle would have followed without an accident are regarded as the output of the network, and the vehicle's running-state parameters (the pre-accident track, speed, acceleration, course angle, the readings of each sensor, and so on) as its input. The coordinates at each moment are determined by the coordinates and running-state parameters of the previous moment, so a causal relationship links successive time steps; an ordinary fully connected network (DNN) is therefore unsuitable, and a recurrent neural network (RNN) should be used instead.
A recurrent neural network (RNN) is an artificial neural network whose nodes are directionally connected in a cycle, so its internal state can exhibit dynamic temporal behaviour. Unlike a feed-forward network, an RNN can use its internal memory to process input sequences of arbitrary length, capturing the logical relationships between successive time points.
As the forward- and back-propagation algorithms of the RNN show, a major problem it faces is that gradients may vanish or explode over long time spans. In this embodiment a Long Short-Term Memory network (LSTM) is used to build the RNN, which solves this problem.
In an LSTM, each cell carries its own state, which it updates internally; gate structures add information to or delete it from the cell state, and the gates are divided into an input gate, an output gate, and a forget gate. The LSTM preserves the temporal information of the input sequence and avoids the gradient explosion or vanishing that an RNN suffers over long time spans, ensuring the robustness of the network.
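The gate structure described above can be made concrete with a single LSTM cell step in NumPy. Stacking the four gate transforms into one weight matrix is a common convention assumed here purely for illustration; a real implementation would use a framework such as TensorFlow:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step. The forget, input and output gates decide what
    to erase from, add to, and read out of the cell state c.
    Shapes (assumed layout): W is (4H, D), U is (4H, H), b is (4H,)."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    f = sigmoid(z[0:H])           # forget gate
    i = sigmoid(z[H:2 * H])       # input gate
    o = sigmoid(z[2 * H:3 * H])   # output gate
    g = np.tanh(z[3 * H:4 * H])   # candidate cell update
    c_new = f * c + i * g         # gated cell-state update
    h_new = o * np.tanh(c_new)    # gated output
    return h_new, c_new
```

Because every gate output lies in (0, 1) and the cell update is additive, gradients can flow through c across many steps, which is what lets the LSTM avoid the vanishing/exploding-gradient problem noted above.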
The input and output data of the neural network are as follows:
The input data consists of two parts: the trajectory parameters (coordinates) of the minute before the accident, and the filtered running-state parameters of the vehicle obtained by real-time GPS positioning (position, speed, acceleration, course angle, yaw rate, angular acceleration, tire pressure, and tire surface tension), in the form of a time-ordered sequence of numeric streams. The output data is the predicted trajectory (coordinates) the vehicle would have followed had no accident occurred, likewise a time-ordered sequence.
The training result of the neural network is as follows:
training uses a TensorFlow deep learning framework and GPU rendering + CUDA accelerated training under a Ubuntu 16.04LTS system. The training data sets are 2500 groups in total, each group uses Numpy to generate data to simulate various parameters in the vehicle running process, and matplotlib is used to simulate the vehicle running track within one minute before the accident happens and draw the track.
In this embodiment, after 1000 rounds of training on the training data set, the accuracy on 50 identically structured verification data sets reaches 98.3%. The large training set largely eliminates random error, and repeated verification on the 50 different verification sets confirms the accuracy of the deep learning method.
To verify the feasibility of the method while ensuring safety and avoiding property damage, the trained model was simulated within an OpenCV-based framework. The simulation shows almost no deviation between the preset particle trajectory and the trajectory predicted at each moment by the deep learning model, demonstrating that the predicted track closely approaches the track a real vehicle would follow if no accident occurred.
The restored track obtained by the method can be sent to a mobile terminal application program (APP) of a user:
1) Main functions of the APP: the APP communicates with the on-board device. After an accident, the user logs into the APP and, with one tap, obtains the vehicle's pre-accident driving track and the image simulated from the original track.
Both the vehicle owner and the traffic police can log into the APP platform to perform the relevant operations. The basic logical framework is: the owner enters the APP's operation interface to upload the relevant data and pictures, which the background program stores in a database; when the police enter the interface to handle the accident, the background program retrieves the corresponding data from the database.
2) The use process comprises the following steps:
Registration: a user must register before using the APP. The user simply downloads the APP from the mobile app store, taps Register, and follows the flow shown in fig. 3.
The specific operation flows are as follows: for vehicle owners, the post-accident operation of the whole application is shown in fig. 4; for the traffic police, it is shown in fig. 5.
In the present embodiment, assume two vehicles located at (121.21348° E, 31.28772° N) and (121.21352° E, 31.28771° N), respectively. The microprocessor unit Kalman-filters and records the sensor readings, while the GPS synchronizes the vehicles' positions in real time. FIG. 6 shows the real-time readings of a vehicle's angular acceleration in the horizontal plane together with the Kalman-filtered data, which serve as input to the neural network along with the GPS-recorded trajectory. Assuming the two vehicles collide, the neural network quickly computes from the input data and, once the calculation completes, draws the trajectories the two vehicles would have followed without an accident; the result shows that the first vehicle was going straight before the collision and the second vehicle was turning right.
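As a side check on the embodiment's numbers (purely illustrative, not part of the patented method), the initial separation of the two vehicles can be computed from their GPS fixes with the standard haversine formula:

```python
import math

def gps_distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in metres between two GPS fixes."""
    R = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2)
         * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# The two starting positions given in the embodiment are a few metres apart
d = gps_distance_m(31.28772, 121.21348, 31.28771, 121.21352)
```

The small separation is consistent with the collision scenario the embodiment goes on to simulate.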

Claims (6)

1. A traffic accident scene restoration method, characterized by comprising the following steps:
S1, drawing the driving track of the vehicle during the minute before the accident using data collected by a GPS device and sensors arranged on the vehicle;
S2, taking the drawn driving track of the minute before the accident and the data in the vehicle recorder at the moment of the accident as input, and obtaining the restored driving track of the vehicle with a deep learning algorithm;
wherein in step S1 the traveling direction of the vehicle is determined from the correspondence between the difference in the readings of tension sensors mounted on the two sides of a wheel and the vehicle's turning direction and angle, and emergency braking is detected by pressure sensors mounted on the wheel; the data collected by the GPS device and the sensors comprise: vehicle position, speed, acceleration, course angle, tire pressure, tire surface tension, yaw rate, and angular acceleration;
and wherein step S1 specifically comprises:
judging the vehicle traveling direction: obtaining the data of the tension sensors on the two sides of the wheel, and determining the traveling direction of the vehicle from the correspondence between the tension difference on the two sides and the vehicle's turning direction and angle;
judging whether the vehicle braked suddenly: obtaining the data of the pressure sensors on the wheel, detecting the moment at which the pressure between the front tires and the ground increases, and then simulating in a computer the change in the wheel traces from the vehicle's deceleration during braking;
drawing an image: drawing the track of the vehicle during the minute before the accident from the values measured by the GPS device, the tension sensors, and the pressure sensors.
2. The traffic accident scene restoration method according to claim 1, characterized in that the data collected by the sensors is first filtered by a Kalman filtering algorithm before any further processing.
3. The traffic accident scene restoration method according to claim 1, characterized in that the deep learning algorithm is specifically a neural network algorithm.
4. The traffic accident scene restoration method according to claim 3, characterized in that the neural network algorithm specifically uses an LSTM to build a recurrent neural network.
5. The traffic accident scene restoration method according to claim 1, characterized in that the restored vehicle driving track is obtained by simulation based on OpenCV.
6. The traffic accident scene restoration method according to claim 1, characterized in that the restored vehicle driving track is sent to the mobile terminal of the traffic police or the vehicle owner.
CN201910162408.1A 2019-03-05 2019-03-05 Traffic accident scene restoration method Expired - Fee Related CN110009903B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910162408.1A CN110009903B (en) 2019-03-05 2019-03-05 Traffic accident scene restoration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910162408.1A CN110009903B (en) 2019-03-05 2019-03-05 Traffic accident scene restoration method

Publications (2)

Publication Number Publication Date
CN110009903A CN110009903A (en) 2019-07-12
CN110009903B (en) 2022-02-18

Family

ID=67166479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910162408.1A Expired - Fee Related CN110009903B (en) 2019-03-05 2019-03-05 Traffic accident scene restoration method

Country Status (1)

Country Link
CN (1) CN110009903B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110782670A (en) * 2019-11-05 2020-02-11 北京汽车集团有限公司 Scene restoration method based on data fusion, vehicle cloud platform and storage medium
CN112863244B (en) * 2019-11-28 2023-03-14 大众汽车股份公司 Method and apparatus for promoting safe driving of a vehicle
CN111640308B (en) * 2020-04-24 2022-03-08 合肥湛达智能科技有限公司 Deep learning red light running detection method based on embedded terminal
CN111651829B (en) * 2020-06-12 2022-03-29 招商局重庆交通科研设计院有限公司 Road simulation platform based on intelligent traffic flow simulation
CN114510533B (en) * 2022-01-06 2023-03-17 北京中交兴路车联网科技有限公司 Accident recovery method and device, electronic equipment and storage medium
CN114373311B (en) * 2022-01-27 2023-06-06 支付宝(杭州)信息技术有限公司 Vehicle driving guiding method and device
TWI824788B (en) * 2022-10-21 2023-12-01 財團法人車輛研究測試中心 System and method of integrating traffic accident assistance investigation and safety of the intended functionality scene establishment
CN117912259B (en) * 2024-03-19 2024-06-21 中汽数据有限公司 Traffic accident reproduction method and device based on automobile electronic data, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105975721A (en) * 2016-05-27 2016-09-28 大连楼兰科技股份有限公司 Accident recurrence collision simulation establishing method and accident recurrence collision simulation method based on vehicle real-time motion state
CN107291972A (en) * 2017-03-10 2017-10-24 清华大学 The Intelligent Vehicle Driving System efficiency evaluation method excavated based on multi-source data
CN108665757A (en) * 2018-03-29 2018-10-16 斑马网络技术有限公司 Simulating vehicle emergency system and its application
CN109272745A (en) * 2018-08-20 2019-01-25 浙江工业大学 Vehicle trajectory prediction method based on deep neural network

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2768092Y (en) * 2004-12-29 2006-03-29 何崇中 Work information recorder for automobile tyre
US7536247B2 (en) * 2005-11-30 2009-05-19 Ford Global Technologies Method and apparatus for reconstructing an accident using side satellite lateral acceleration sensors
US9067565B2 (en) * 2006-05-22 2015-06-30 Inthinc Technology Solutions, Inc. System and method for evaluating driver behavior
CN102073789B (en) * 2010-12-30 2013-03-20 长安大学 Analysis, calculation and simulative representation system for combined accident of vehicle-vehicle collision and vehicle-fixed object collision
CN102063573B (en) * 2010-12-31 2012-09-12 长安大学 Computer system for analysis, computation, simulation and reconstruction of car falling accident
US9296263B2 (en) * 2011-12-23 2016-03-29 Prasad Muthukumar Smart active tyre pressure optimising system
US20210256614A1 (en) * 2014-09-22 2021-08-19 State Farm Mutual Automobile Insurance Company Theft identification and insurance claim adjustment using drone data
EP3159853B1 (en) * 2015-10-23 2019-03-27 Harman International Industries, Incorporated Systems and methods for advanced driver assistance analytics
CN105956265A (en) * 2016-04-29 2016-09-21 大连楼兰科技股份有限公司 Method and system for simulation and reappearance of crash accident simulation with process-based simulation parameters
CN106897948B (en) * 2017-01-04 2021-01-01 北京工业大学 Riding and pushing traffic accident identification method
CN108791276B (en) * 2017-04-26 2020-05-19 湖南大学 A quick method for judging the linear/non-linear working state of tire lateral force

Also Published As

Publication number Publication date
CN110009903A (en) 2019-07-12

Similar Documents

Publication Publication Date Title
CN110009903B (en) Traffic accident scene restoration method
CN110177374B (en) V2X functional application testing method, device and system based on vehicle-road cooperation
CN107169402B (en) vehicle lane positioning
CN109784254B (en) A method, device and electronic device for vehicle violation event detection
CN110796007B (en) Method and computing device for scene recognition
CN107491736A (en) A kind of pavement adhesion factor identifying method based on convolutional neural networks
CN114274972A (en) Scene recognition in an autonomous driving environment
CN116901975B (en) Vehicle-mounted AI security monitoring system and method thereof
CN111183464B (en) System and method for estimating saturated flow at signalized intersections based on vehicle trajectory data
US20210042642A1 (en) Driving action evaluating device, driving action evaluating method, and recording medium storing driving action evaluating program
CN113126624B (en) Automatic driving simulation test method, device, electronic equipment and medium
CN115879294B (en) A method and system for generating full-sample vehicle flow trajectories based on multi-vehicle environment perception
JP2020096286A (en) Determination device, determination program, determination method, and neural network model generation method
CN113183988B (en) Method, device and equipment for supervising automatic driving of vehicle and storage medium
CN112419345B (en) High-precision tracking method for patrol cars based on echo state network
CN114970805A (en) Method for operating a motor vehicle
CN113092135A (en) Test method, device and equipment for automatically driving vehicle
CN112434782A (en) Architecture and method for state estimation fault detection using crowdsourcing and deep learning
EP4220581A1 (en) Estimation of risk exposure for autonomous vehicles
EP4219261A1 (en) Estimation of risk exposure for autonomous vehicles
CN113848073A (en) A driving assistance method and system for predicting the probability of collision between people and vehicles
CN114968187A (en) Platform for perception system development of an autopilot system
CN114968189A (en) Platform for perception system development of an autopilot system
CN114037969A (en) Automatic driving lane information detection method based on radar point cloud and image fusion
CN113795010A (en) Method and device for testing automobile queue, electronic equipment and storage medium thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220218