GB2562049A - Improved pedestrian prediction by using enhanced map data in automated vehicles - Google Patents
- Publication number
- GB2562049A (application GB1706922.0A)
- Authority
- GB
- United Kingdom
- Legal status
- Withdrawn
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/095—Predicting travel path or likelihood of collision
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0287—Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling
- G05D1/0289—Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling with means for avoiding collisions between vehicles
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
- G08G1/096708—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
- G08G1/096725—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control where the received information generates an automatic action on the vehicle control
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
- G08G1/096766—Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission
- G08G1/096791—Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission where the origin of the information is another vehicle
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/161—Decentralised systems, e.g. inter-vehicle communication
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/164—Centralised systems, e.g. external to vehicles
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/165—Anti-collision systems for passive traffic, e.g. including static obstacles, trees
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
Abstract
A method of predicting a road user's movements (e.g. a pedestrian's) via enhanced map data, in order to influence the motion of a vehicle. The method comprises: determining the vehicle's position and orientation, and positional information of other road users, using the vehicle's on-board sensors; optionally obtaining positional information of other road users in the vicinity and along the planned route of the vehicle from external sources (e.g. traffic cameras/real-time information, smartphones); and predicting the possible movements of each surrounding vehicle or pedestrian based on pre-computed statistical map information relating to the position and type of road user. A subset of the predicted possible movements is used to quantify uncertainty, and the predicted obstacle positions with confidence levels are used to influence the vehicle's motion planning (such as the car's speed and direction) or to provide a warning. The process prevents potential collisions between the vehicle, which may be autonomous, and other road users. The process may be completed on board the vehicle, with external information sent wirelessly to the vehicle, or all or part of the data processing may be decentralised via a cloud service. Preferably the static environment is also considered, obtained from on-board vehicle sensors and external sources.
Description
(71) Applicant(s):
Kompetenzzentrum - Das Virtuelle Fahrzeug Forschungsgesellschaft mbH (Incorporated in Austria), Inffeldgasse 21/A/1, Graz 8010, Austria
(72) Inventor(s):
Michael Hartmann, Michael Stolz, Daniel Watzenig
(56) Documents Cited:
US 9612123 A1; US 20170016740 A1; US 20150338497 A1; US 20080097699 A1; US 8457827 A1; US 20160363935 A1; US 20100100324 A1
(58) Field of Search:
INT CL B60W, G05D, G08G. Other: WPI, EPODOC, INTERNET
(74) Agent and/or Address for Service:
Kompetenzzentrum - Das Virtuelle Fahrzeug Forschungsgesellschaft mbH, Inffeldgasse 21/A/1, Graz 8010, Austria
(54) Title of the Invention: Improved pedestrian prediction by using enhanced map data in automated vehicles
Abstract Title: A method of predicting a road user's movements in order to influence the motion plan of a vehicle
(57) Abstract as reproduced above.
Fig. 5
At least one drawing originally filed was informal and the print reproduced here is taken from a later filed formal copy.
Figures
Fig. 1
Fig. 2 (Motion Planning Level: Communication Structure; Motion Planning Level: Information Processing; (Deterministic) Control Level)
Fig. 3
Fig. 4
Fig. 5 (Step 1 to Step 7)
Application No. GB1706922.0
RTM Date: 30 October 2017
Intellectual Property Office
The following terms are registered trade marks and should be read as such wherever they occur in this document:
WAZE
HERE
OpenStreetMap
Mapillary
TomTom
Google
AStar
Intellectual Property Office is an operating name of the Patent Office www.gov.uk/ipo
Improved pedestrian prediction by using enhanced map data in automated vehicles
BACKGROUND OF INVENTION
In everyday life, people often participate in traffic without being aware of it. We usually take it for granted that we will reach our destination safely, and we change our role in traffic, whether as a driver, cyclist or pedestrian. In reality, however, there are still too many fatalities on the roads of Europe and the world. Technical innovations in the field of automated driving functions have steadily reduced the number of fatalities. Nevertheless, many problems and open questions remain for automated driving. Especially in complex environments (e.g. cities) with many different road users (e.g. pedestrians, bicycles, animals), there are many complex situations for the motion planning algorithm. For this reason, a new process is presented for a dynamic, network-based collision avoidance system which uses cloud services and novel human movement prediction algorithms.
Autonomous vehicles should...
• react and drive like a human
• make correct decisions
• react appropriately in various situations (e.g. reactively)
• drive safely and efficiently, also in uncertain and dynamic environments
Therefore a technical process is invented for...
• network-based navigation and a new motion planning approach using cloud services
• cloud services with intelligent sensor networks
• human movement prediction algorithms in the ego-vehicle and in the cloud
• a spatial subdivision method for cities (e.g. for the analysis of spatial contexts and the use of current motion planning and prediction algorithms)
• prediction of the time at which a road user is detected
• decoding of spatial dependencies
• selection of safe trajectories and an adequate selection unit
• a control strategy for the vehicle
• consideration of the context, the spatial environment and (a-)typical movements in the prediction of human movements
The technical effect of the invention is an increase in safety for (autonomous) vehicles (Level 3 and Level 4) in complex, uncertain, dynamic environments (e.g. urban environments), where important knowledge for motion planning is missing. It is an effective safety concept for vulnerable road users, with collision avoidance and motion planning assistance especially for urban inner-city areas.
IMPORTANT DEFINITIONS
The following definitions are intended as an aid to understanding; no claim to completeness is made.
City Graph (compare: Block W: World): A mathematical description of the road network with nodes and edges (e.g. streets).
OpenStreetMap: Geodata with open access, developed by a large web community.
Motion planning: The search for future trajectories for the ego-vehicle, depending on the believed (future) time-state space.
(Future) time-state space: A mathematical description of the future state space, depending on the predictions; necessary for collision avoidance. Depending on the uncertainty, there are several variants: deterministic state space, belief state space, plausible state space.
Uncertainty Quantification (compare: Block M: Uncertainty Quantification/Predictive Time-State-Space with confidence levels): In safety-related applications there are several methods to quantify uncertainty. The uncertainty representation and propagation can be varied.
Autonomous Mode (compare: Block T A: Period A and Block T E: Period E): In this document, autonomous mode means that the ego-vehicle is equipped with on-board sensors and processing units that enable a self-driving mode without external sensors from the infrastructure. Information from external resources offers the possibility to drive less conservative trajectories.
Situation prediction (compare: Block 11...In: Machine Perception Units (e.g. different configurations)): Besides the prediction of (human) movement (e.g. positions), further aspects can be incorporated in the prediction, such as semantic information, personal internal stance or aspects of the environment.
Cloud service (compare: Block E: Server (e.g. Cloud service)): A cloud service which assists the ego-vehicle in the following aspects: traffic flow coordination, navigation, motion planning, situation recognition and prediction. It is assumed that there are many sensor networks. For safety reasons, several servers are presumed, to achieve redundancy; it is therefore also possible for the ego-vehicle to communicate with multiple sources.
Ego-vehicle: The ego-vehicle, which can drive in an autonomous mode, consists of Block A: (Autonomous) Vehicle(s); Block B: On-car communication units for communication with the cloud service; Block C: Processing and Navigation Unit; and Block D: On-board sensors and perception units.
Predicted Time-State Space: The predicted time-state space is necessary for the motion planning of a robotic system. Predictions of where the obstacles will move in the future lead to the predicted time-state space.
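As an illustration of these definitions, a predicted time-state space with confidence levels might be sketched as follows. This is a minimal, hypothetical model not taken from the patent: the pedestrian's reachable region grows with an assumed maximum walking speed, the `conf` factor crudely mimics a confidence level, and the planner rejects ego states inside the region.

```python
import math

def predict_occupancy(x0, y0, v_max, horizon, dt=0.5, conf=0.95):
    """For each future time step, bound the pedestrian's position by a
    disc whose radius grows with an assumed maximum walking speed.
    'conf' scales the radius to mimic a confidence level (toy model)."""
    spaces = []
    for k in range(1, horizon + 1):
        t = k * dt
        spaces.append({"t": t, "center": (x0, y0), "radius": conf * v_max * t})
    return spaces

def is_state_safe(x, y, t, spaces, margin=0.5):
    """A planned ego state (x, y) at time t is safe if it lies outside
    every predicted obstacle region at that time (plus a safety margin)."""
    for s in spaces:
        if abs(s["t"] - t) < 1e-9:
            if math.hypot(x - s["center"][0], y - s["center"][1]) <= s["radius"] + margin:
                return False
    return True

spaces = predict_occupancy(0.0, 0.0, v_max=1.5, horizon=4)
print(is_state_safe(10.0, 0.0, t=0.5, spaces=spaces))  # True: far from the pedestrian
print(is_state_safe(0.5, 0.0, t=2.0, spaces=spaces))   # False: inside the reachable disc
```

The growing discs are the deterministic variant of the time-state space; belief or plausible variants would replace the fixed radius with a propagated probability distribution.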
STATE OF THE ART
Criticisms of existing test procedures with virtual environments and/or robots are:
• consideration of only a static environment
• predefined trajectories of pedestrians
• no interaction
• testing highly influenced by the paradigm of testing the vehicle control of a deterministic vehicle
• not adequate or realistic for real-world scenarios (e.g. cities) and the safety verification of Level 3 or Level 4 autonomous vehicles
Map data and databases
There are different (online) map data services available: online map actualization (e.g. Waze [2]) and open-source projects. Meanwhile there are some companies that specialize in spatial data, for example HERE [3], [4] and TomTom [5]. Maps in the OpenStreetMap format are also available for research purposes [6]. There are also new approaches, databases and technologies for human movement detection (e.g. Mapillary [7], Placemeter [8]), current 3D virtual environments (e.g. 3D-OSM [9]) and new web services (e.g. bostonography.com [10], geOps [11], [12]). With the Google infrastructure [13] or via other commercial APIs (e.g. [4]) it is possible to use map APIs (e.g. APIs for geocoding, places and maps) with various kinds of information [14]. An example of how the Google API can be used for tracking applications with smartphones is shown in [15].
It has been shown that it is possible to communicate between autonomous vehicles and pedestrians via a communication network [16]. Current surveys cover human movement detection [17] and the associated technology [18]. A current pedestrian detection system for driver assistance with off-board and on-board sensing units is presented in [19]. A pedestrian detection system with on-board sensing only is presented in [20].
Human Movement Prediction
In [21] a study of the state of the art in movement prediction algorithms is presented. In [22] growing hidden Markov models are presented, which incrementally learn new behaviors. [23] offers some new principles from a statistical inference perspective, where causal dependencies are incorporated in the movement prediction of pedestrians. In [24] Gaussian processes are used, with which spatial dependencies can be analyzed.
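The idea of predicting pedestrian movement from pre-computed statistical map information (compare the abstract) can be sketched with a simple Markov transition model over map regions. The regions and probabilities below are invented for illustration; a real system would learn a per-cell transition table offline from observed trajectories.

```python
# Hypothetical pre-computed statistical map: for each map region, a
# transition table learned offline (all values invented for illustration).
transitions = {
    "sidewalk":     {"sidewalk": 0.8, "kerb": 0.2},
    "kerb":         {"kerb": 0.5, "crossing": 0.4, "sidewalk": 0.1},
    "crossing":     {"crossing": 0.7, "sidewalk_far": 0.3},
    "sidewalk_far": {"sidewalk_far": 1.0},
}

def predict(dist, steps):
    """Propagate a probability distribution over map regions 'steps'
    time steps ahead through the transition table."""
    for _ in range(steps):
        nxt = {}
        for region, p in dist.items():
            for succ, q in transitions[region].items():
                nxt[succ] = nxt.get(succ, 0.0) + p * q
        dist = nxt
    return dist

# A pedestrian observed at the kerb: two steps later, crossing is most likely.
d = predict({"kerb": 1.0}, 2)
print(d)
```

This is the discrete skeleton of the growing hidden Markov models of [22]; those additionally learn new transition entries online as unseen behaviors are observed.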
Motion Planning
Surveys of motion planning can be found in [25], [26], [27], [28], giving an overview of the state of the art. Two current families of approaches are promising for motion planning: optimization-based and sampling-based motion planning algorithms.
Sampling-based motion planning
Among sampling approaches, rapidly exploring random trees (RRTs) are the best known; they build a graph with different variants of exploration of the state space. For non-holonomic systems, kinodynamic versions are used [29], [30], [31]. RRTs and variants can be found in automotive path planning [32], [33]. These can be used for real-time applications, but do not have redundant pathways. Redundant pathways could be advantageous for dynamic environments with moving objects, but are computationally costly. In this document a compromise in the sense of optimality is presented. Motion planning in high-dimensional state spaces is known to be PSPACE-hard [33]. Probabilistic roadmaps (PRM) and rapidly exploring random trees (RRT) are incremental sampling-based planners.
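A minimal RRT in the spirit of the sampling-based planners cited above might look like this. It is a 2-D toy in a 10 x 10 workspace with one circular obstacle; the step size, goal bias and workspace bounds are illustrative, not the patent's parameters.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, iters=2000, goal_tol=0.5, seed=1):
    """Minimal 2-D RRT sketch: repeatedly steer the nearest tree node a
    fixed step toward a random sample (with 10% goal bias), keeping only
    collision-free states, until the goal region is reached."""
    random.seed(seed)
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = goal if random.random() < 0.1 else \
            (random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        d = math.dist(nodes[i], sample)
        if d == 0.0:
            continue
        nx, ny = nodes[i]
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        if not is_free(new):
            continue  # reject states colliding with the obstacle
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:  # goal reached: backtrack the path
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

# Free space: everything outside a circular obstacle at (5, 5).
free = lambda p: math.dist(p, (5.0, 5.0)) > 1.5
path = rrt((1.0, 1.0), (9.0, 9.0), free)
```

The single path returned is exactly the "no redundant pathways" property criticized above: if the obstacle later moves onto the path, the tree must be regrown.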
Optimization-based motion planning
In [34], [35], [36], [37] and [38] mixed-integer linear programming (MILP) algorithms are used for motion planning. MILP can be used within an MPC formulation [38] and is promising because it incorporates binary variables for logical expressions.
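The role of the binary variables can be illustrated with a toy stand-in for the MILP obstacle disjunction: at every step the trajectory must lie on at least one side of an obstacle box (a logical "or"), and the integer decision selects which side. A real formulation hands these binaries to a MILP solver inside the MPC loop; here the discrete choice is simply enumerated, and the obstacle, offsets and cost are invented for illustration.

```python
# Axis-aligned obstacle box: x_min, x_max, y_min, y_max (illustrative).
OBSTACLE = (4.0, 6.0, -1.0, 1.0)

def avoids(x, y):
    """The MILP disjunction: the point is outside the box if at least
    one of the four side constraints holds."""
    x_min, x_max, y_min, y_max = OBSTACLE
    return x < x_min or x > x_max or y < y_min or y > y_max

def plan(n=11, offsets=(0.0, -2.0, 2.0)):
    """Return the cheapest (smallest-magnitude) constant lateral offset
    such that every trajectory point x = 0..n-1 avoids the obstacle.
    The offset plays the role of the integer variable; a solver would
    branch over it instead of enumerating."""
    for y in sorted(offsets, key=abs):
        if all(avoids(float(x), y) for x in range(n)):
            return y
    return None

print(plan())  # -2.0: driving straight (y = 0) would cut through the box
```

In the cited formulations the same disjunction is encoded with big-M constraints, so that continuous trajectory optimization and the logical side-selection are solved jointly.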
Inventions in ADAS and autonomous vehicles
In [39] an automated movement of a vehicle is described, especially in a fixed environment (e.g. a park or factory), where the surrounding road users are detected with external sensors; it covers semi-autonomous movement of the ego-vehicle with detection of the vehicle's environment by outdoor sensors, with applications for parking assistants or industrial robots. In [40] the prediction of preceding vehicles is done by adapting the perception module (region of interest) with data fusion; the result is an adaptation of the velocity and steering angle to assist the driver. In [41] the prediction of traffic participants is done by a system with a localization unit for movable objects. Collision avoidance (prediction of collisions and warnings) is done with cooperative sensors (active or passive RFID transponders) for pedestrian detection (not detection of hidden objects with cameras) and classification of the object. In [42] a driving strategy is determined by predicting movements, evaluating environment data and modelling a virtual driver with artificial intelligence. In [43] the region of movement is predicted, the normality of movement is classified, and movement models are selected for prediction. In [44] a collision avoidance system is introduced to bring the vehicle to a safe state with adequate, automated steering and acceleration; its modules cover the prediction of trajectories of moving objects, warning of the driver, estimation of the collision risk, building of a collision-state map, trying different acceleration/steering combinations to bring the vehicle to a safe driving state, and the use of hypothetical trajectories. In [45] a digital map of a parking area is used with a Car2X communication network, so that the position data of mobile objects are detected; this information is used for navigation to a target position with collision avoidance.
In [46] a process for collision avoidance and automated configuration of the working area of a robot is discussed. In [47] a classification of the type of object (e.g. bicycle, pedestrian) and a classification and prediction of behavior are presented; features are the adaptation and correction of characteristic values and motion planning depending on predictions. In [48] a probabilistic situation analysis is presented for the fusion of situation analyses to trigger safety systems; the application is a pre-crash system. In [49] a prediction procedure for trajectories for collision avoidance and the control of velocity is presented. In [50] a visual pedestrian detection is described, with extraction of a partial image and a processing unit that predicts human behavior. In [51] a communication-based vehicle-pedestrian collision warning system is presented, with pedestrian detection, prediction of moving objects and the ego-vehicle, and a path collision circuit for the detection of collisions. In [52] a communication-based vehicle-pedestrian collision warning system is presented; the system includes a base, a mast and a plurality of sensors, and describes the prediction of moving objects and the ego-vehicle and a path collision circuit for the detection of collisions. In [53] a crowd movement prediction using optical flow algorithms is presented, with a predictive map of a distribution of objects of interest (OOIs). In [54] a computer vision approach for collision avoidance for pedestrians and analysis of the optical flow is presented. In [55] a computer vision approach for the estimation of time to collision (TTC) using a plurality of images is presented. In [56], [57] systems for object detection are presented for use in autonomous vehicles.
Interacting vehicles
In [58] a new German research program for cooperative interacting vehicles is described. In [59] many aspects of cooperative and interaction-based driving are analyzed for safety reasons.
In [60] a cloud-based system for autonomous vehicles is described, which assists the internal navigation and motion planning with information from the cloud. In [61] a start-up for the optimization of a fleet of autonomous vehicles via a cloud is described.
REFERENCES
[1] Iteam, https://iteam-project.net/, accessed: 2017-03-21.
[2] Waze, waze.com, accessed: 2017-03-14.
[3] wego.here.com, wego.here.com, accessed: 2017-03-14.
[4] here.com, here.com, accessed: 2017-03-14.
[5] tomtom.com, tomtom.com, accessed: 2017-03-14.
[6] H. Winner, S. Hakuli, and G. Wolf, Handbuch Fahrerassistenzsysteme: Grundlagen, Komponenten und Systeme für aktive Sicherheit und Komfort. Springer-Verlag, 2011.
[7] Mapillary, mapillary.com, accessed: 2017-03-14.
[8] Placemeter, placemeter.com/, accessed: 2017-03-14.
[9] osm3d, osm-3d.org, accessed: 2017-03-14.
[10] Bostonography, bostonography.com/, accessed: 2017-03-14.
[11] GeOps, geops.de/, accessed: 2017-03-14.
[12] GeOps, tracker.geops.ch, accessed: 2017-03-14.
[13] Devgoogle, developers.google.com/, accessed: 2017-03-14.
[14] Socialapis, https://www.programmableweb.com/news/top-10-social-apis-facebook-twitter-andgoogle-plus/analysis/2015/02/17, accessed: 2017-03-21.
[15] T. Jeske, Sicherheit und Datenschutz in nicht-interaktiven crowdsourcing Szenarien, Ph.D. dissertation, 2015.
[16] C. P. Urmson, I. J. Mahon, D. A. Dolgov, and J. Zhu, Pedestrian notifications, Nov. 24 2015, US Patent 9,196,164.
[17] N. A. Ogale, A survey of techniques for human detection from video, Survey, University of Maryland, vol. 125, no. 133, p. 19, 2006.
[18] Xsens, xsens.com, accessed: 2017-03-14.
[19] P. Borges, R. Zlot, and A. Tews, Pedestrian detection for driver assist and autonomous vehicle operation using offboard and onboard sensing, in Australian Conference on Robotics and Automation (ACRA), 2010, pp. 1-6.
[20] M. Enzweiler and D. M. Gavrila, Monocular pedestrian detection: Survey and experiments, IEEE transactions on pattern analysis and machine intelligence, vol. 31, no. 12, pp. 2179-2195, 2009.
[21] S. Lefèvre, D. Vasquez, and C. Laugier, A survey on motion prediction and risk assessment for intelligent vehicles, Robomech Journal, vol. 1, no. 1, p. 1, 2014.
[22] A. D. V. Govea, Incremental learning for motion prediction of pedestrians and vehicles, Ph.D. dissertation, 2010.
[23] B. D. Ziebart, Modeling purposeful adaptive behavior with the principle of maximum causal entropy, Ph.D. dissertation, 2010.
[24] D. Ellis, E. Sommerlade, and I. Reid, Modelling pedestrian trajectory patterns with Gaussian processes, in Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on. IEEE, 2009, pp. 1229-1234.
[25] C. Goerzen, Z. Kong, and B. Mettler, A survey of motion planning algorithms from the perspective of autonomous uav guidance, in Selected papers from the 2nd International Symposium on UAVs, Reno, Nevada, USA June 8-10, 2009. Springer, 2009, pp. 65-100.
[26] B. Paden, M. Čáp, S. Z. Yong, D. Yershov, and E. Frazzoli, A survey of motion planning and control techniques for self-driving urban vehicles, IEEE Transactions on Intelligent Vehicles, vol. 1, no. 1, pp. 33-55, 2016.
[27] C. Katrakazas, M. Quddus, W.-H. Chen, and L. Deka, Real-time motion planning methods for autonomous on-road driving: State-of-the-art and future research directions, Transportation Research Part C: Emerging Technologies, vol. 60, pp. 416-442, 2015.
[28] D. González, J. Pérez, V. Milanés, and F. Nashashibi, A review of motion planning techniques for automated vehicles, IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 4, pp. 1135-1145, 2016.
[29] J. Choi, Kinodynamic motion planning for autonomous vehicles, International Journal of Advanced Robotic Systems, vol. 11, no. 6, p. 90, 2014.
[30] A. Perez, R. Platt, G. Konidaris, L. Kaelbling, and T. Lozano-Perez, LQR-RRT*: Optimal sampling-based motion planning with automatically derived extension heuristics, in Robotics and Automation (ICRA), 2012 IEEE International Conference on. IEEE, 2012, pp. 2537-2542.
[31] S. Karaman and E. Frazzoli, Optimal kinodynamic motion planning using incremental sampling-based methods, in Decision and Control (CDC), 2010 49th IEEE Conference on. IEEE, 2010, pp. 7681-7687.
[32] U. Schwesinger, M. Rufli, P. Furgale, and R. Siegwart, A sampling-based partial motion planning framework for system-compliant navigation along a reference path, in Intelligent Vehicles Symposium (IV), 2013 IEEE. IEEE, 2013, pp. 391-396.
[33] D. J. Webb and J. v. d. Berg, Kinodynamic rrt*: Optimal motion planning for systems with linear differential constraints, arXiv preprint arXiv:1205.5088, 2012.
[34] T. Schouwenaars, E. Feron, and J. How, Safe receding horizon path planning for autonomous vehicles, in Proceedings of the Annual Allerton Conference on Communication, Control and Computing, vol. 40, no. 1, 2002, pp. 295-304.
[35] A. Richards, T. Schouwenaars, J. P. How, and E. Feron, Spacecraft trajectory planning with avoidance constraints using mixed-integer linear programming, Journal of Guidance, Control, and Dynamics, vol. 25, no. 4, pp. 755-764, 2002.
[36] T. Schouwenaars, B. De Moor, E. Feron, and J. How, Mixed integer programming for multi-vehicle path planning, in Control Conference (ECC), 2001 European. IEEE, 2001, pp. 2603-2608.
[37] T. Schouwenaars, Safe trajectory planning of autonomous vehicles, Ph.D. dissertation, Massachusetts Institute of Technology, 2005.
[38] J. Eilbrecht and O. Stursberg, Auction-based cooperation of autonomous vehicles using mixed integer planning, AAET-Automatisiertes und vernetztes Fahren 2017, pp. 266-285, 2017.
[39] A. Augst and C. Patron, Verfahren zur Ausführung einer zumindest teilweise automatisierten Bewegung eines Fahrzeugs innerhalb eines räumlich begrenzten Bereichs, Patent DE 10 2014 218 429 A1, Mar. 17, 2016. [Online]. Available:
https://register.dpma.de/DPMAregister/pat/register?AKZ=1020142184290
[40] R. Kastner, M. Kleinehagenbrock, M. Nishigaki, H. Kamiya, N. Mori, S. Wako-shi, and Kusuhara, Driver assist system with cut-in prediction, Patent DE 10 2015 200 215 A1, Jul. 28, 2016.
[41] S. Zecha and R. R. Helmar, Verfahren und Vorrichtung zur Prädiktion der Position und/oder Bewegung eines Objekts relativ zu einem Fahrzeug, Patent DE 10 2009 035 072 A1, Jul. 28, 2009.
[42] T. Fechner et al., Verfahren zum Bestimmen einer Fahrstrategie, Patent DE 10 2014 216 257 A1, Feb. 18, 2016.
[43] K. Sakai, T. Kindo, and M. Harada, Vorrichtung zum Vorhersagen der Bewegung eines mobilen Körpers, Patent DE 11 2010 000 802 T5, Feb. 12, 2010.
[44] J. Chassot, G. Ottmar, H. Frederic, S. Paasche, A. Schwarzhaupt, G. Speigelberg, and A. Sulzmann, Verfahren und System zur Vermeidung einer Kollision eines Kraftfahrzeuges mit einem Objekt, Patent DE 10 2005 023 832 A1, Nov. 30, 2006.
[45] S. Nordbruch, Verfahren und Vorrichtung zum Betreiben eines Fahrzeugs respektive eines Parkplatzes, Patent DE 10 2014 224 104 A1, Nov. 26, 2016.
[46] E.-H. Waled, Verfahren und Vorrichtung zum Vermeiden von Kollisionen zwischen Industrierobotern und deren Objekten, Patent DE 102 26 140 A1, Jun. 13, 2004.
[47] K. Taguchi, Vorrichtung zur Vorhersage eines Verhaltens, Patent DE 11 2008 002 268 T5, Jul. 15, 2010.
[48] M.-M. Meinecke et al., Probabilistische Auslösestrategie, Patent DE 10 2008 046 488 A1, Mar. 11, 2010.
[49] n.d., Verfahren und Vorrichtung zum Prädizieren einer Bewegungstrajektorie, Patent DE 10 2006 036 363 A1, Apr. 5, 2007.
[50] T. Kindo, Pedestrian action prediction device and pedestrian action prediction method, Patent EP 2 759 998 A1, Sep. 20, 2011.
[51] L. Caminiti, J. C. Lovell, J. J. Richardson, and C. T. Higgins, Communication based vehicle-pedestrian collision warning system, Dec. 2, 2014, US Patent 8,903,640.
[52] L. Caminiti, J. C. Lovell, and J. J. Richardson, Communication based vehicle-pedestrian collision warning system, Mar. 12, 2009, US Patent App. 12/403,067.
[53] T. N. Dos Santos, R. C. Folco, and B. H. Leitao, Crowd movement prediction using optical flow algorithm, May 15, 2013, US Patent App. 13/894,458.
[54] D. Rosenbaum, A. Gurman, Y. Samet, G. P. Stein, and D. Aloni, Pedestrian collision warning system, Dec. 29, 2015, US Patent App. 14/982,198.
[55] G. Stein, E. Dagan, O. Mano, and A. Shashua, Collision warning system, Jun. 29, 2015, US Patent App. 14/753,762.
[56] J. Zhu, M. S. Montemerlo, C. P. Urmson, and A. Chatham, Object detection and classification for autonomous vehicles, Jun. 5, 2012, US Patent 8,195,394.
[57] --, Object detection and classification for autonomous vehicles, Oct. 28, 2014, US Patent 8,874,372.
[58] C. Stiller, W. Burgard, B. Deml, L. Eckstein, F. Flemisch, F. Köster, M. Maurer, and G. Wanielik, Kooperativ interagierende Automobile.
[59] M. Naumann, P. F. Orzechowski, C. Burger, Ö. Ş. Taş, and C. Stiller, Herausforderungen für die Verhaltensplanung kooperativer automatischer Fahrzeuge.
[60] S. Kumar, S. Gollakota, and D. Katabi, A cloud-assisted design for autonomous driving, in Proceedings of the first edition of the MCC workshop on Mobile cloud computing. ACM, 2012, pp. 41-46.
[61] bestmile, https://bestmile.com/, accessed: 2017-03-21.
[62] LTE-Anbieter, http://www.lte-anbieter.info/5g/, accessed: 2017-03-22.
DESCRIPTION OF THE INVENTION
Legend
• Navigation Level: Communication Structure
o Block W: World: Greatly simplified, the world is a spatio-temporal space with dynamic and static subjects and objects. It involves many philosophical, psychological, historical, cultural, technological and physical aspects: laws of nature, different socio-cultural environments, time variance, the nature of matter, determinism and causality. Motion planning in reality is very complex because of the uncertainty, the dynamics and the missing knowledge about the causality.
o Block A: (Autonomous) Vehicle(s) with Block B: On-Car communication units for communication with the cloud service, Block C: Processing and Navigation Unit and Block D: On-Board sensors and perception units
o Block A: (Autonomous) Vehicle(s): Nonholonomic dynamic system with differential constraints
o Block B: On-Car communication units for communication with the cloud service
o Block C: Processing and Navigation Unit: Intelligent map, communication and processing units
o Block D: On-Board sensors and perception units: On-Board Perception Units (e.g. laser, (long-range) radar, LiDAR, ...)
o Block E: Server (e.g. Cloud service)
• Motion Planning Level: Communication Structure
o Block A: (Autonomous) Vehicle(s) with Block B: On-Car communication units for communication with the cloud service, Block C: Processing and Navigation Unit and Block D: On-Board sensors and perception units
o Block E: Server (e.g. Cloud service): Intelligent cloud service with communication unit; data communication with the Block B: On-Car communication units for communication with the cloud service and Block F: Intelligent Sensor Network and Situation data bases
o Block F: Intelligent Sensor Network and Situation data bases: Technological development; the number of different device types that reveal positions of human behavior is increasing (e.g. smartphones, tablets, computing devices). Perception of global environments.
o Block G: Subset of World (small infrastructure, e.g. polyhedron): There are different strategies to subdivide the Block W: World: optimization-based net strategies, graph-based approaches (e.g. topology control), and grid-based approaches with different cell structures. Incorporation of the city graph is possible.
o Block H: Road Users (e.g. pedestrians, vehicles, animals, ...): Humans participating in traffic. Movement behavior is influenced by sensory organs, the information processing of the subconscious and of awareness, actions (e.g. human motor activity), determinism or free will, risk behavior, time-variant intentions, socio-cultural background, emotions, the I-and-you background, interaction with the dynamic Block W: World, traffic rules etc. Some statistical dependencies are computationally usable for predictions if the causality of the environment is known.
• Motion planning level: Communication Structure
o Block I1...In: Machine Perception Units (e.g. different configurations): There are different processing units for machine perception; in particular, different assumptions are made. Normally there are different sensors available on the vehicle, whose signals are preprocessed. Sensor data from different cloud services are also perceived.
o Block J1...Jn: Situation Recognition Units: Different situation classification techniques are used to understand the current situation. For uncertainty quantification, some variation of the situations is assumed.
o Block K1...Kn: Situation Prediction Units: Different methods for situation prediction, especially for movement prediction, are used to find obstacle configurations in the future time-state space. Different environmental settings and multimodal movements can be considered. Physics-, maneuver- and interaction-based situation prediction can be applied.
o Block L: Information Fusion: The different predictions are merged.
o Block M: Uncertainty Quantification/Predictive Time-State-Space with confidence levels: For the future state-space configurations, confidence levels for the predictions should be computed. Unconventional measures of uncertainty can be used to increase the confidence levels.
(Deterministic-)Control Level
o Block N: Action Planning: The future actions are computed in a high-level setting, where the future confidence level sets are used.
o Block O: (Kinodynamic-) Motion Planning: Optionally takes the vehicle dynamics into account. Future reference trajectories are computed, which are used by the control unit. The confidence level sets are the basis for the decision.
o Block P: Control Unit: A control law with state feedback is used to compute the actuator input signals to control the vehicle.
Time Periods
o Block T A: Period A: Recognition of the need to change the autonomous mode
o Block T B: Period B: Query
o Block T C: Period C: Cloud-based processing
o Block T D: Period D: Reply message
o Block T E: Period E: Reconfiguration of the autonomous mode
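As an illustrative aid only (not part of the claimed method), the legend's processing chain from the machine perception units (Blocks I1...In) through situation recognition (J1...Jn), prediction (K1...Kn), information fusion (L), uncertainty quantification (M) and planning/control (N, O, P) can be sketched as a toy pipeline. Every function name, threshold and data structure below is hypothetical and serves only to illustrate the data flow:

```python
# Illustrative sketch of the block pipeline I -> J -> K -> L -> M -> N/O -> P.
# All names and thresholds are hypothetical; each stage is a plain function.

def perceive(raw_sensor_data):          # Blocks I1...In
    """Turn raw sensor readings into a list of detected objects."""
    return [{"id": i, "pos": p} for i, p in enumerate(raw_sensor_data)]

def recognize_situation(objects):       # Blocks J1...Jn
    """Classify the current scene, here simply by object count."""
    return "crowded" if len(objects) > 2 else "sparse"

def predict(objects, situation):        # Blocks K1...Kn
    """Naive constant-position prediction with situation-dependent spread."""
    spread = 2.0 if situation == "crowded" else 1.0
    return [{"id": o["id"], "pos": o["pos"], "spread": spread} for o in objects]

def fuse(predictions_per_unit):         # Block L
    """Merge predictions from several units (here: flatten)."""
    return [p for preds in predictions_per_unit for p in preds]

def quantify_uncertainty(predictions):  # Block M
    """Attach a confidence region (radius) to every predicted state."""
    return [dict(p, radius=3.0 * p["spread"]) for p in predictions]

def plan_and_control(confidence_sets):  # Blocks N, O, P
    """Pick a dummy action that respects the confidence regions."""
    return "slow_down" if any(c["radius"] > 3.0 for c in confidence_sets) else "keep_speed"

objects = perceive([(1.0, 2.0), (4.0, 5.0), (7.0, 1.0)])
situation = recognize_situation(objects)
preds = predict(objects, situation)
sets = quantify_uncertainty(fuse([preds]))
action = plan_and_control(sets)
```

In this toy run, three detected objects yield a "crowded" situation, larger prediction spreads, and hence a cautious action.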
Division of the Block W: World
The Block W: World is divided into a structured set of cells. Different strategies can be used:
• grid based approach (e.g. quadrature, polyhedrons) • optimization based approach • topology control (e.g. XTC) • graph based approach
The idea of subdividing the Block W: World is to make a natural division between navigation and motion planning. The motion planning is reconfigured on the traversal into a new cell. The navigation can easily be done with graph based search algorithms (e.g. A*, Dijkstra and more). The subdivision has several advantages:
• a complex area can be divided into less difficult areas • new coding perspectives: instead of positions, cell IDs can be used
• Advantage for navigation algorithms • Triggering the communication transfer between the cloud and ego-vehicle • Separation of navigation and motion planning • Easy Formulation with Mixed-Integer Linear Programming
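As an illustrative sketch (not part of the claimed method), navigation over such a cell graph can be done with Dijkstra's algorithm; the cell IDs, adjacency and edge costs below are hypothetical:

```python
import heapq

def dijkstra(adjacency, start, goal):
    """Shortest path over a cell graph: cell ID -> list of (neighbor, cost)."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale heap entry
        for neighbor, cost in adjacency.get(cell, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = cell
                heapq.heappush(heap, (nd, neighbor))
    # Reconstruct the cell-ID route from goal back to start.
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return list(reversed(path))

# Cells addressed by ID instead of position, as suggested above.
cells = {
    "C1": [("C2", 1.0), ("C3", 4.0)],
    "C2": [("C3", 1.0), ("C4", 5.0)],
    "C3": [("C4", 1.0)],
}
route = dijkstra(cells, "C1", "C4")
```

A* would add a heuristic term to the same loop; traversal into the next cell of the returned route would then trigger the replanning and communication sequence described below.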
Communication procedure
Figure 1 shows a flow chart of the invention. The whole process can be divided into five time periods, illustrated by Block T A: Period A, Block T B: Period B, Block T C: Period C, Block T D: Period D and Block T E: Period E. In time period Block T A: Period A the ego-vehicle follows an autonomous mode. It is recognized that the motion planning has to be adapted (e.g. a new polyhedron is traversed). Therefore, in Block T B: Period B, the local perception and prediction information is sent to the cloud service with a query message for adaption of the motion planning. The cloud service receives the query message in Block T C: Period C and starts to process the request. It has connections to different intelligent sensor networks with prediction units. The advantage is that the larger range of observed areas makes otherwise riskier driving maneuvers possible. Different spatial knowledge resources can be used for the observation of movements of dynamic obstacles (e.g. pedestrians and vehicles). Important information for the adaption of the autonomous mode is sent back to the vehicle in Block T D: Period D. This information can be:
• future trajectories • future situation predictions • spatially relevant updates and meanings for the movement prediction • parameters for the prediction • preferences or advice for handling the future situation • handling of the problem of unknown risks • semantic and cultural aspects • situation-specific or location-typical information (e.g. street party) • pattern recognition information (e.g. manifold learning parameters) • approximation parameters • learned policies for motion planning with machine learning approaches • manifold learning for movement behavior prediction
In Block T E: Period E the autonomous mode is adapted depending on the received information.
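The five time periods above can be sketched as a simple sequence; this is an illustrative sketch with hypothetical names, not a normative protocol definition:

```python
from enum import Enum

class Period(Enum):
    A = "autonomous mode / recognize need to adapt"
    B = "query to cloud with local perception and prediction data"
    C = "cloud-based processing"
    D = "reply message with adaption information"
    E = "reconfiguration of the autonomous mode"

def communication_sequence(needs_adaption):
    """Run through periods A-E; skip B-D when no adaption is needed."""
    log = [Period.A]
    if needs_adaption:          # e.g. traversal into a new cell
        log.append(Period.B)    # send query with local predictions
        log.append(Period.C)    # cloud processes the request
        log.append(Period.D)    # reply with e.g. future trajectories
        log.append(Period.E)    # adapt the autonomous mode
    return log

sequence = communication_sequence(needs_adaption=True)
```

When no adaption is needed, the sketch stays in Period A and no query is issued.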
Communication Structure and Vehicle Processing Units
Navigation level: Communication structure
It is possible to incorporate existing route searching algorithms or new internet communication standards like the LTE successor [62]. Figure 2 shows the concept from four different perspectives. In the upper left picture the navigation communication structure is illustrated. The vehicle has a directed communication (e.g. 5G standard with low latency) to the Block E: Server (e.g. Cloud service) via the Block B: On-Car communication units for communication with the cloud service for the new concept for automated driving. The concept is based on the following:
Motion planning level: communication structure
Block Kl...Kn: Situation Prediction Units:
• Gaussian Processes • Hidden Markov Models • Gaussian Mixture Models • Bayes-Filter • Manifold Learning
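As a minimal illustration of the Bayes-filter family listed above, a constant-velocity pedestrian prediction with time-growing variance can be sketched as follows; all parameter values are hypothetical:

```python
def predict_position(pos, vel, dt, sigma0, process_noise):
    """Constant-velocity prediction: the mean moves linearly while the
    per-axis positional variance grows with the prediction horizon."""
    mean = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    variance = sigma0 ** 2 + process_noise * dt  # 1-D variance per axis
    return mean, variance

# Pedestrian at (2 m, 0 m) walking 1 m/s in x, predicted 2 s ahead.
mean, var = predict_position((2.0, 0.0), (1.0, 0.0), dt=2.0,
                             sigma0=0.3, process_noise=0.5)
```

Gaussian processes, HMMs or mixture models would replace the linear mean and the single growing variance with richer, possibly multimodal distributions, but the output shape (predicted state plus uncertainty) is the same.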
Motion planning level: Information processing
Different sources, the Block I1...In: Machine Perception Units (e.g. different configurations), Block J1...Jn: Situation Recognition Units and Block K1...Kn: Situation Prediction Units, are brought together in Block L: Information Fusion. Depending on the kind of uncertainty representation, each unit (Block I1...In, Block J1...Jn, Block K1...Kn) can be duplicated. This can be used by the Block M: Uncertainty Quantification/Predictive Time-State-Space with confidence levels to generate confidence levels. From the control perspective it is useful to have a deterministic problem. In new environments this is often not the case, and therefore uncertainty quantification is a suitable method for a kind of conversion of a stochastic problem into a deterministic problem.
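Such a conversion can be sketched as follows, assuming independent 1-D Gaussian estimates from duplicated units and an inverse-variance fusion rule (both are assumptions of this sketch, not prescribed by the description):

```python
import math

def fuse_gaussians(estimates):
    """Inverse-variance weighted fusion of independent 1-D Gaussian
    (mean, variance) estimates from several prediction units."""
    inv = sum(1.0 / var for _, var in estimates)
    mean = sum(m / var for m, var in estimates) / inv
    return mean, 1.0 / inv

def confidence_interval(mean, var, z=3.0):
    """Symmetric confidence interval used as a deterministic occupancy set."""
    half = z * math.sqrt(var)
    return (mean - half, mean + half)

# Two prediction units disagree slightly about a pedestrian's x position.
mean, var = fuse_gaussians([(4.0, 1.0), (5.0, 4.0)])
interval = confidence_interval(mean, var)
```

The resulting interval is a deterministic set that a classical planner can simply treat as forbidden space.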
(Deterministic) Control Level
With the confidence levels of the predicted time-state space, the stochastic problem of motion planning can be handled in a classical manner. There is a direct flow from Block M: Uncertainty Quantification/Predictive Time-State-Space with confidence levels to Block N: Action Planning, Block O: (Kinodynamic-) Motion Planning: Eventually consideration of the vehicle dynamics, and Block F: Intelligent Sensor Network and Situation data bases.
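Handling the problem in a classical manner can be sketched as a deterministic check of candidate trajectories against the confidence sets; the one-dimensional trajectories and forbidden intervals below are hypothetical:

```python
def trajectory_is_safe(trajectory, occupancy_sets):
    """Reject a trajectory if any waypoint falls inside a forbidden interval."""
    return all(not (lo <= x <= hi)
               for x in trajectory
               for lo, hi in occupancy_sets)

def choose_trajectory(candidates, occupancy_sets):
    """Classical deterministic selection: the first safe candidate wins."""
    for traj in candidates:
        if trajectory_is_safe(traj, occupancy_sets):
            return traj
    return None  # no safe option: the control unit would trigger braking

forbidden = [(3.5, 6.5)]            # confidence set around a pedestrian
candidates = [[1.0, 4.0, 8.0],      # cuts through the forbidden set
              [1.0, 2.5, 8.0]]      # stays clear of it
safe = choose_trajectory(candidates, forbidden)
```

Because the stochastic prediction has been reduced to fixed forbidden sets, no probabilistic reasoning is needed at this level.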
Description of Figures
Fig. 1 shows, on the left, the communication structure between the vehicle and the cloud and, on the right, the spatial triggering of the communication sequence on the transition to a new cell
Fig. 2 shows the communication structure and a concept view of the whole system
Fig. 3 shows the cell for motion planning
Fig. 4 shows the subdivision of urban area for the navigation
Fig. 5 shows the flow chart of the main communication sequence with processing tasks
Claims (11)
1. Method for enhancing maps with additional data to generate one or more predictions of other road users' movements in order to influence the decision making in navigation and motion planning of a vehicle, comprising: in a first step, the measurement of the vehicle's ego position and orientation; in a second step, the receiving of position information and optionally motion information of surrounding road users, including pedestrians, from vehicle on-board distance and position sensors such as radar, lidar and ultrasound sensors; in a third step, the receiving of optional position and optional motion information of road users in the surrounding of the vehicle's own position (also outside of the perception area of the vehicle) and along the further planned trip route via wireless communication from sources outside the vehicle, especially from cloud services or road-side infrastructure devices; in a fourth step, the look-up of specific precomputed statistical map information with respect to the position and type of each surrounding road user; in a fifth step, the use of the looked-up data and the position and motion information of the road users to predict one or more possible movements of the surrounding road users; in a sixth step, the application of uncertainty quantification techniques to a subset of the possible movement predictions for safe decision making; and in a seventh step, the use of predicted obstacle positions with confidence levels to influence the vehicle's motion, such as steering, acceleration and braking.
2. Method according to claim 1, comprising, instead of the third, fourth and fifth steps, a step where the information of the first and second steps is sent wirelessly from the vehicle to a device outside the vehicle, especially a cloud service, where the specific enhanced map for the surrounding road users is looked up and the position and motion information of the road users is used to predict one or more possible movements of the surrounding road users, and where the possible motion predictions or the final movement avoiding the surrounding road users is sent back wirelessly to the vehicle.
3. Method according to claims 1 and 2, characterized by the localization, state prediction and assessment of dynamic obstacles in uncertain and dynamic environments, especially in urban environments, comprising the subdivision of the uncertain and dynamic environment into sub-areas, the consideration of static environment structure information including roads and buildings based on information from wireless internet communication and the description of the environment structure by means of graphs, and the consideration of dynamic environment information from cloud services, such as smart phone positions, camera images for person localization and availability of groups of pedestrians due to public events or opening times, together with vehicle on-board sensor information.
4. Method according to claims 1 to 3, characterized by wireless communication devices between cloud services and vehicles to consider both static and dynamic obstacles such as road users.
5. Method according to claims 1 to 4, characterized by processing units for an obstacle motion prediction model based on information from cloud services and wireless information exchange.
6. Method according to claim 4, characterized by carrying out cost-intensive calculation and optimization processes by means of cloud services and by communication of the results to the vehicle.
7. Method according to claims 1 to 6, comprising information fusion units for vehicle on-board sensor information, obstacle predictions and cloud service information for the motion planning of the vehicle.
8. Method according to claims 1 to 7, characterized by the usage of uncertainty quantification units for risk evaluation and decision making.
9. Method according to claims 1 to 8, characterized by motion prediction and optimization units using manifold learning and Gaussian processes for obstacle movement prediction, and by situation recognition with pattern recognition units.
10. Method according to claims 1 to 9, characterized by carrying out warning or redirection of the vehicle in case of potential collisions between pedestrians and other vulnerable road users and the vehicle.
11. Method according to claims 1 to 10, characterized by an a priori allocation of required communication resources and communication time slots within the sub-areas along the planned route for the predicted time when the vehicle will be in the respective sub-area.
Intellectual Property Office
Application No: GB 1706922.0
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB1706922.0A GB2562049A (en) | 2017-05-02 | 2017-05-02 | Improved pedestrian prediction by using enhanced map data in automated vehicles |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| GB201706922D0 GB201706922D0 (en) | 2017-06-14 |
| GB2562049A true GB2562049A (en) | 2018-11-07 |
Family
ID=59010984
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB1706922.0A Withdrawn GB2562049A (en) | 2017-05-02 | 2017-05-02 | Improved pedestrian prediction by using enhanced map data in automated vehicles |
Country Status (1)
| Country | Link |
|---|---|
| GB (1) | GB2562049A (en) |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110275933A (en) * | 2019-06-26 | 2019-09-24 | 广州小鹏汽车科技有限公司 | Vehicle running synchronous display method and device, terminal and computer equipment |
| DE102019127176A1 (en) * | 2019-10-09 | 2021-04-15 | Ford Global Technologies, Llc | Controlling an autonomous vehicle |
| WO2021099111A1 (en) * | 2019-11-19 | 2021-05-27 | Robert Bosch Gmbh | Device and method for processing vehicle environment sensor data |
| US11447129B2 (en) | 2020-02-11 | 2022-09-20 | Toyota Research Institute, Inc. | System and method for predicting the movement of pedestrians |
| WO2023005223A1 (en) * | 2021-07-27 | 2023-02-02 | 北京三快在线科技有限公司 | Trajectory planning method and apparatus, storage medium, device, and computer program product |
| US11878684B2 (en) | 2020-03-18 | 2024-01-23 | Toyota Research Institute, Inc. | System and method for trajectory prediction using a predicted endpoint conditioned network |
| US12079702B2 (en) | 2020-11-24 | 2024-09-03 | Toyota Motor Engineering & Manufacturing North America, Inc. | Driving automation device to mitigate the risk of other road users |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11861458B2 (en) * | 2018-08-21 | 2024-01-02 | Lyft, Inc. | Systems and methods for detecting and recording anomalous vehicle events |
| WO2020078550A1 (en) * | 2018-10-17 | 2020-04-23 | Nokia Technologies Oy | Virtual representation of non-connected vehicles in a vehicle-to-everything (v2x) system |
| CN112233800B (en) * | 2020-11-19 | 2024-06-14 | 吾征智能技术(北京)有限公司 | Disease prediction system based on abnormal behaviors of children |
| CN117830450A (en) * | 2023-12-29 | 2024-04-05 | 广州小鹏自动驾驶科技有限公司 | Road network construction method and device and electronic equipment |
| CN118269968B (en) * | 2024-06-04 | 2024-09-24 | 吉林大学 | Prediction method of automatic driving collision risk fused with online map uncertainty |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080097699A1 (en) * | 2004-12-28 | 2008-04-24 | Kabushiki Kaisha Toyota Chuo Kenkyusho | Vehicle motion control device |
| US20100100324A1 (en) * | 2008-10-22 | 2010-04-22 | Toyota Motor Engineering & Manufacturing North America, Inc. | Communication based vehicle-pedestrian collision warning system |
| US8457827B1 (en) * | 2012-03-15 | 2013-06-04 | Google Inc. | Modifying behavior of autonomous vehicle based on predicted behavior of other vehicles |
| US20150338497A1 (en) * | 2014-05-20 | 2015-11-26 | Samsung Sds Co., Ltd. | Target tracking device using handover between cameras and method thereof |
| US20160363935A1 (en) * | 2015-06-15 | 2016-12-15 | Gary Shuster | Situational and predictive awareness system |
| US20170016740A1 (en) * | 2015-07-16 | 2017-01-19 | Ford Global Technologies, Llc | Method and apparatus for determining a vehicle ego-position |
| US9612123B1 (en) * | 2015-11-04 | 2017-04-04 | Zoox, Inc. | Adaptive mapping to navigate autonomous vehicles responsive to physical environment changes |
2017-05-02: GB 1706922.0A, patent GB2562049A (en), status withdrawn
Also Published As
| Publication number | Publication date |
|---|---|
| GB201706922D0 (en) | 2017-06-14 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |