
GB2633588A - System, device, and method for predicting an accident probability associated with a vehicle - Google Patents


Info

Publication number
GB2633588A
GB2633588A GB2314010.6A GB202314010A GB2633588A GB 2633588 A GB2633588 A GB 2633588A GB 202314010 A GB202314010 A GB 202314010A GB 2633588 A GB2633588 A GB 2633588A
Authority
GB
United Kingdom
Prior art keywords
vehicle
accident
image data
controller
indicator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2314010.6A
Other versions
GB202314010D0 (en)
Inventor
P P Ajay
Kannan Srividhya
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aumovio Autonomous Mobility Germany GmbH
Original Assignee
Continental Autonomous Mobility Germany GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Continental Autonomous Mobility Germany GmbH filed Critical Continental Autonomous Mobility Germany GmbH
Priority to GB2314010.6A priority Critical patent/GB2633588A/en
Publication of GB202314010D0 publication Critical patent/GB202314010D0/en
Publication of GB2633588A publication Critical patent/GB2633588A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems
    • G08G1/166 Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08 Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/09 Taking automatic action to avoid collision, e.g. braking and steering
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08 Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095 Predicting travel path or likelihood of collision
    • B60W30/0956 Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60C VEHICLE TYRES; TYRE INFLATION; TYRE CHANGING; CONNECTING VALVES TO INFLATABLE ELASTIC BODIES IN GENERAL; DEVICES OR ARRANGEMENTS RELATED TO TYRES
    • B60C23/00 Devices for measuring, signalling, controlling, or distributing tyre pressure or temperature, specially adapted for mounting on vehicles; Arrangement of tyre inflating devices on vehicles, e.g. of pumps or of tanks; Tyre cooling arrangements
    • B60C23/20 Devices for measuring or signalling tyre temperature only

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

A first vehicle (101,Fig.1) predicts a probability of an accident with a second vehicle (102,Fig.1) by using an image sensor 502a,b to obtain thermal image data 502 associated with the second vehicle. A controller (110,Fig.1) determines an accident indicator comprising a temperature of a tyre of the second vehicle and indicative of the accident probability of the vehicles. When the accident indicator exceeds an acceptable range or threshold, the controller generates a corresponding output signal (e.g. deceleration, emergency brake or direction control signal) for the first vehicle to respond to the accident probability. The sensor may be a stereo thermal sensor. The image data may comprise depth data 504 generated using a Pyramid Stereo Matching network (PSMnet) and the controller may extract features from the images and the depth map using Aggregate View Object Detection (AVOD) based architecture. The accident indicator may comprise pose estimation 506 of the second vehicle.

Description

SYSTEM, DEVICE, AND METHOD FOR PREDICTING AN ACCIDENT PROBABILITY ASSOCIATED WITH A VEHICLE
TECHNICAL FIELD
Various aspects of this disclosure relate to systems, devices, and methods for predicting accident probabilities associated with vehicles.
BACKGROUND
The following discussion of the background art is intended to facilitate an understanding of the present disclosure only. It should be appreciated that the discussion is not an acknowledgment or admission that any of the material referred to was published, known, or is part of the common general knowledge of the person skilled in the art in any jurisdiction as of the priority date of the disclosure.
Tyre explosion is one cause of vehicle accidents. Existing forward collision avoidance systems focus on the distances between vehicles. Further, many existing solutions are based on radars, lidars and monocular cameras (Camera based Forward Collision Avoidance System, IRJET). Radars provide sparse 3D point clouds. Even though lidars provide dense 3D point clouds, lidars are very expensive compared to cameras and radars. Monocular cameras are used to estimate object distances from the camera and to estimate 3D pose based on the captured RGB images, which require proper lighting for achieving good results. Accordingly, there exists a need for an improved device, system and/or method for predicting potential accidents associated with vehicles that seeks to address at least one of the aforementioned issues.
SUMMARY
Various embodiments comprise a system, device, and method to predict a potential accident associated with one or more second vehicles while the one or more second vehicles are in the proximity of a first vehicle, particularly wherein the first vehicle is an ego vehicle.
According to an aspect of the present disclosure, there is provided a system for use with a first vehicle to predict an accident probability of a second vehicle in relation with the first vehicle, the system comprising: at least one image sensor configured to obtain image data associated with the second vehicle; a controller configured to determine at least one accident indicator of the second vehicle from the image data, the at least one accident indicator indicative of the accident probability of the second vehicle in relation to the first vehicle, wherein the controller is further configured to generate an output signal based on the at least one accident indicator, when the at least one accident indicator exceeds an acceptable range or a threshold, for the first vehicle to respond to the accident probability of the second vehicle in relation to the first vehicle, and wherein the image data comprises thermal data and wherein the at least one accident indicator of the second vehicle comprises a temperature indicator of at least one tyre of the second vehicle.
The system of the present disclosure seeks to provide a provident system for a first vehicle to predict and respond when a tyre of a second vehicle is likely to explode/burst or lose grip and cause a collision with the first vehicle.
According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the at least one image sensor comprises a stereo camera having two or more thermal image sensors. According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the controller is configured to obtain the temperature indicator from a segment of the image data in a bounding box corresponding to the at least one tyre of the second vehicle.
According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the image data comprises depth data and the at least one accident indicator of the second vehicle comprises a pose estimation of the second vehicle.
According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the controller is configured to perform data fusion of 2D image data and 3D point cloud obtained from the image data.
According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the controller is configured to generate a depth map using a custom Pyramid Stereo Matching network (PSMnet) based on the depth data.
According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the controller is further configured to extract features from images captured by the at least one image sensor and the depth map using Aggregate View Object Detection (AVOD) based architecture.
According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the at least one accident indicator of the second vehicle comprises a depth estimate of the second vehicle obtained based on the depth map and wherein the controller is further configured to generate the output signal based on the depth estimate, when the depth estimate is less than a distance threshold.
According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the output signal comprises at least one of: a deceleration control signal, an emergency brake control signal, a directional change control signal and/or a maintain direction control signal. According to another aspect of the present disclosure, there is provided a first vehicle, comprising the system as described.
According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the at least one image sensor comprises a stereo camera having two or more image sensors and positioned at a rear portion of the first vehicle.
According to another aspect of the present disclosure, there is provided a computer-implemented method for predicting an accident probability of a second vehicle in relation with a first vehicle, the method comprising obtaining image data associated with the second vehicle; determining at least one accident indicator of the second vehicle from the image data, the at least one accident indicator indicative of the accident probability of the second vehicle in relation with the first vehicle; generating, based on the at least one accident indicator, when the at least one accident indicator exceeds an acceptable range or a threshold, an output signal for the first vehicle to respond to the accident probability of the second vehicle in relation with the first vehicle and wherein the image data comprises thermal data and wherein the at least one accident indicator of the second vehicle comprises a temperature indicator of at least one tyre of the second vehicle.
According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the at least one accident indicator comprises a pose estimation of the second vehicle, and/or a depth estimate of the second vehicle.
According to an embodiment which may be combined with any above-described embodiment or with any below described further embodiment, the image data comprises depth data, and the method further comprises: generating a depth map using a Pyramid Stereo Matching network (PSMnet) based on the depth data; and extracting features from images captured by the at least one image sensor (104) and the depth map using Aggregate View Object Detection (AVOD) based architecture. According to another aspect of the present disclosure, there is provided a non-transitory computer-readable medium storing computer executable code comprising instructions that cause a processor to carry out the method as described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure will be better understood with reference to the detailed description when considered in conjunction with the non-limiting examples and the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a system for use with a first vehicle to predict an accident probability associated with a second vehicle according to some embodiments.
FIG. 2A shows a block diagram of an exemplary deep learning model according to some embodiments and FIG. 2B shows a block diagram of an exemplary data fusion model according to some embodiments.
FIG. 3 is a schematic diagram of a controller suitable for use with an anti-collision system of a vehicle and/or an auto-driving control unit of an autonomous vehicle.
FIG. 4 is a flow chart depicting a method for predicting an accident probability of a second vehicle in relation to a first vehicle.
FIG. 5 shows a block diagram depicting a specific use case of the method for predicting an accident probability of a second vehicle in relation to a first vehicle.
DETAILED DESCRIPTION
The following detailed description refers to the accompanying drawings that show, by way of illustration, specific details and embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure. Other embodiments may be utilized, and structural and logical changes may be made, without departing from the scope of the disclosure. The various embodiments are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments.
The embodiments described in the context of one of the devices, systems, or methods are analogously valid for the other devices, systems, or methods. Similarly, the embodiments described in the context of a device are analogously valid for a system or a method, and vice-versa.
Features that are described in the context of an embodiment may correspondingly be applicable to the same or similar features in the other embodiments. Features that are described in the context of an embodiment may correspondingly be applicable to the other embodiments, even if not explicitly described in these other embodiments. Furthermore, additions and/or combinations and/or alternatives as described for a feature in the context of an embodiment may correspondingly be applicable to the same or similar feature in the other embodiments.
In the context of the various embodiments, the articles "a", "an", and "the" as used with regard to a feature or element include a reference to one or more of the features or elements.
As used herein, the term "and/or" includes any and all combinations of one or more s of the associated listed items.
While terms such as "first", "second" etc., may be used to describe various vehicles, such vehicles are not limited by the above terms. The above terms are used only to distinguish one vehicle from another, and do not define an order and/or significance of the vehicles.
lo The term "data" as used herein may be understood to include information in any suitable analog or digital form, e.g., provided as a file, a portion of a file, a set of files, a signal or stream, a portion of a signal or stream, a set of signals or streams, and the like. Further, the term "data" may also be used to mean a reference to information, e.g., in form of a pointer. The term "data", however, is not limited to the aforementioned examples and may take various forms and represent any information as understood in the art. Any type of information, as described herein, may be handled for example via one or more processors in a suitable way, e.g. as data.
The terms "processor" or "controller" as, for example, used herein may be understood as any kind of entity that allows handling data. The data may be handled according to one or more specific functions executed by the processor or controller. Further, a processor or controller as used herein may be understood as any kind of circuit, e.g., any kind of analog or digital circuit. A processor or a controller may thus be or include an analog circuit, digital circuit, mixed-signal circuit, logic circuit, processor, microprocessor, Central Processing Unit (CPU), Graphics Processing Unit (GPU), Digital Signal Processor (DSP), Field Programmable Gate Array (FPGA), integrated circuit, Application Specific Integrated Circuit (ASIC), etc., or any combination thereof. Any other kind of implementation of the respective functions, which will be described below in further detail, may also be understood as a processor, controller, or logic circuit. It is understood that any two (or more) of the processors, controllers, or logic circuits detailed herein may be realized as a single entity with equivalent functionality or the like, and conversely that any single processor, controller, or logic circuit detailed herein may be realized as two (or more) separate entities with equivalent functionality or the like.
The term "memory" detailed herein may be understood to include any suitable type of memory or memory device, e.g., a hard disk drive (HDD), a solid-state drive (SSD), a flash memory, etc. The term "module" detailed herein refers to, forms part of, or includes an Application Specific Integrated Circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor (shared, dedicated, or group) that executes code; other suitable hardware components that provide the described I 0 functionality; or a combination of some or all of the above, such as in a system-on-chip. The term module may include memory (shared, dedicated, or group) that stores code executed by the processor.
Differences between software and hardware implemented data handling may blur. A processor, controller, and/or circuit detailed herein may be implemented in software, hardware, and/or as a hybrid implementation including software and hardware.
In the following, aspects or embodiments will be described in detail.
According to an aspect of the present disclosure, there is a system for use with a first vehicle to predict a potential accident associated with a second vehicle. The system may be installed as part of an anti-collision system on the first vehicle, and/or as part of an autonomous driving control unit (ADCU) of the first vehicle, in the case where the first vehicle is an autonomous or semi-autonomous vehicle. In some embodiments, the system may be or form part of a driver assistance system of the first vehicle. Particularly, the system may seek to predict and respond when the second vehicle has a tyre explosion accident, thereby avoiding involvement of the first vehicle in that accident.
Referring to FIG. 1, the system 100 for use with a first vehicle 101 to predict an accident probability of a second vehicle 102 in relation with the first vehicle 101 comprises at least one image sensor 104 configured to obtain image data 106 associated with the second vehicle 102. While "image(s)" and "image data" may be used interchangeably in the description, it should be appreciated that image(s) refers to the image(s) captured by the image sensor and image data refers to the data (e.g. information) obtained from the captured image and that the meanings of "image(s)" and "image data" should be appropriately interpreted in the context of the description.
In some embodiments, the at least one image sensor 104 may be a stereo camera equipped with two or more thermal sensors. In some embodiments, the at least one image sensor 104 may include two or more thermal sensors, each of which is included in a camera. In other words, the image sensor 104 may be a thermal camera having two or more lenses, such that the images captured through the two or more lenses may provide depth data and may be used to generate 3D vision. The image sensor 104 may capture images concurrently from slightly different positions through the two or more lenses of the thermal camera, to generate images that contain depth information. The two or more lenses of the stereo and/or thermal camera(s) may provide 3-dimensional (3D) vision of the second vehicle 102 (e.g. generate dense 3D point clouds). In some embodiments, a single stereo camera having two or more thermal sensors may be used so as to be cost-effective, and accordingly the thermal images captured by the single stereo camera may be the sole input for predicting the accident probability of the second vehicle 102 in relation to the first vehicle 101. Thermal images may be captured well even under low light and bad weather conditions.
In some embodiments, the at least one image sensor 104 may include a rear-view image sensor positioned at a rear portion of the first vehicle 101, include a front-view image sensor positioned at a front portion of the first vehicle 101, and/or include one or two side-view image sensors positioned at each side portion of the first vehicle 101.
According to various non-limiting embodiments, the image sensor 104 may be configured to obtain thermal images of the second vehicle 102. Temperatures of the second vehicle 102, in particular, the tyre temperature of the second vehicle 102, may be determined from the thermal images. In some embodiments, the image sensor 104 may be configured to obtain images from which depth data may be obtained, and to generate dense 3D point clouds. In some embodiments, the image sensor 104 may be configured to obtain a video stream of the second vehicle 102 in a real-time or near real-time environment, the video stream comprising multiple images or image frames. The video stream may be converted to one or more suitable data formats and stored in a database (not shown).
According to various non-limiting embodiments, one or more optional detection sensors 108, which may include a radar sensor, an infrared sensor, a sonar sensor, an ultrasound sensor, and/or other types of proximity sensors, may be configured to detect the second vehicle 102 in the proximity of the first vehicle 101. In some embodiments, at least one detection sensor 108 may be positioned at each side of the first vehicle 101 to detect one or more second vehicles 102 from either side of the first vehicle 101. In some embodiments, the detection sensor 108 may be a radar sensor configured to send and receive electromagnetic radiation, such as radio waves, within a frequency range. The radar sensor 108 may be configured to emit radio waves and receive reflected radio waves as an indication of the presence of one or more second vehicles 102 in the proximity of the first vehicle 101.
In some embodiments, the controller 110 may be configured to compute a distance measure between the first vehicle 101 and the second vehicle 102 based on the radio signals received by the detection sensor 108. The computed distance measure based on the received motion/proximity signals may be compared with a threshold. A computed distance measure that is less than the threshold may indicate that the second vehicle 102 is too near to the first vehicle 101 and may trigger a warning notification and/or a control signal to decelerate and/or swerve the first vehicle 101 so as to avoid a collision.
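As a rough illustration of this distance check, the following Python sketch converts a radar echo's round-trip time into a range and compares it against a safety threshold. The function names and the 1 m default are illustrative assumptions, not values prescribed by the present disclosure.

    # Hypothetical helper: estimate the range to a detected second vehicle from
    # a radar echo's round-trip time, then flag ranges below a safety threshold.
    SPEED_OF_LIGHT_M_S = 299_792_458.0

    def radar_distance_m(round_trip_time_s: float) -> float:
        # The radio wave travels to the target and back, so halve the path length.
        return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

    def too_close(distance_m: float, threshold_m: float = 1.0) -> bool:
        # A computed distance below the threshold may trigger a warning
        # notification and/or a deceleration or swerve control signal.
        return distance_m < threshold_m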
According to various non-limiting embodiments, a controller 110, which may include one or more processors, may be arranged in data communication with the image sensor 104 to obtain or receive the image data 106. The controller 110 may be configured to determine at least one accident indicator based on the image data 106 of the second vehicle 102 (see FIG. 2), the at least one accident indicator indicative of the accident probability of the second vehicle 102 in relation to the first vehicle 101. That is, the accident is due to the second vehicle 102, for example, a tyre explosion of the second vehicle 102, and consequently the accident may cause a further accident between the first vehicle 101 and the second vehicle 102 in the proximity of the first vehicle 101. Advantageously, the system 100 may prevent such a further accident between the first vehicle 101 and the second vehicle 102 by providing the first vehicle 101 with a provident warning of the likelihood of the accident due to the second vehicle 102.
According to various non-limiting embodiments, the controller 110 may be configured to analyse the image data 106 using one or more image processing algorithms to determine the at least one accident indicator. An output signal 116 may then be generated or produced by the controller 110 when the at least one accident indicator exceeds an acceptable range or a threshold. In the context of various embodiments, "an acceptable range" may refer to a safety range that is defined by a lower bound and an upper bound, e.g. a safe range of from 60°C to 150°C. As a non-limiting example, tyre grip may be at its maximum within an optimal operating temperature range defined by a lower bound and an upper bound, and may reduce outside that range (e.g. at a temperature of 200°C), making an accident more likely. In the context of various embodiments, "a threshold" may refer to a maximum or minimum limit that is considered to be safe, e.g. a minimum limit of 1 m.
As a non-limiting example, a distance between vehicles must be greater than a threshold to be safe.
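To make the two kinds of check concrete, a minimal Python sketch follows. The 60°C to 150°C range and the 1 m limit are taken from the examples above; the function names are illustrative assumptions.

    # Two-sided check: an indicator is acceptable inside [lower, upper].
    def tyre_temperature_ok(temp_c: float, lower_c: float = 60.0, upper_c: float = 150.0) -> bool:
        return lower_c <= temp_c <= upper_c

    # One-sided check: a separation distance is acceptable above a minimum limit.
    def separation_ok(distance_m: float, min_gap_m: float = 1.0) -> bool:
        return distance_m >= min_gap_m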
In some embodiments, the acceptable range or threshold may be pre-determined during a training stage or adjustable as and when required. The output signal 116 may be in the form of one or more control signals and/or one or more warning notifications for the first vehicle 101 to respond to the potential accident with the second vehicle 102. The control signal(s) may include at least one of a deceleration control signal, an emergency brake control signal, a directional change signal, and/or a maintain direction control signal. The one or more warning notification(s) may be in the form of an audio alert and/or a visual alert. The output signal 116 may be sent to one or more actuator units 118 to effect a corresponding action associated with the output signal 116. Examples of the one or more actuator units 118 include a braking system, a tyre directional control system, and/or a visual or audio warning system. In some embodiments, the controller 110 may be configured to continuously or periodically (e.g. for a predetermined period of 10, 20, or 30 seconds) determine the at least one accident indicator until the at least one accident indicator is within the acceptable range or the threshold, and the controller 110 may then be configured to revert to a default state, for example, an idle state, a background monitoring state, or a stand-by state.
In some embodiments, the controller 110, in the background monitoring state, may be configured to monitor a status of the second vehicle 102, for example a change in distance to the first vehicle 101 (e.g. detected by the detection sensors 108), which triggers the system 100 to be activated and start obtaining image data of the second vehicle 102.
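A minimal sketch of this activation flow is given below, assuming hypothetical detection_sensor, image_pipeline, and actuators interfaces; none of these names are specified in the present disclosure.

    # Assumed control flow: idle in the background monitoring state, activate
    # the image pipeline on a distance change, and keep re-evaluating the
    # accident indicators until they return to their acceptable ranges.
    def monitor(detection_sensor, image_pipeline, actuators):
        while True:
            if detection_sensor.distance_changed():      # trigger condition
                indicators = image_pipeline.evaluate()   # thermal, depth, pose
                while not indicators.all_acceptable():
                    actuators.apply(indicators.output_signal())
                    indicators = image_pipeline.evaluate()
            # otherwise remain in the default (background monitoring) state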
FIG. 2A shows a block diagram of an exemplary deep learning model and FIG. 2B shows a block diagram of an exemplary data fusion model. The exemplary deep learning model as shown in FIG. 2A and the exemplary data fusion model as shown in FIG. 2B may be implemented in the system 100 that may be installed in the first vehicle 101 to predict an accident probability of the second vehicle 102 in relation to the first vehicle 101.
The system 100 may be part of an anti-collision system associated with the first vehicle 101. In some embodiments, the system 100 may be activated when the second vehicle 102 is detected by the first vehicle 101 to be changing speed and consequently changing its distance to the first vehicle 101. The image sensor 104 may then be activated to capture successive images or videos 106 of one or more second vehicles 102 in the proximity of the first vehicle 101.
According to various non-limiting embodiments, the image data 106 may include thermal data. The at least one accident indicator of the second vehicle 102 may include a temperature indicator of at least one tyre of the second vehicle 102, indicative of a temperature of at least one tyre of the second vehicle 102, that may be determined from a segment of the image data 106 corresponding to the at least one tyre of the second vehicle 102. The controller 110 may be configured to obtain the temperature indicator from the segment of the image data 106 in a bounding box corresponding to the at least one tyre of the second vehicle 102. The bounding box may be used for cropping/extracting the tyre region from the captured thermal image of the scene where the second vehicle 102 is. Pixels of the thermal image (i.e. the image data 106) may contain information about temperature levels of corresponding regions in the scene. The extracted tyre region in the bounding box may be used for the tyre temperature estimation.
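As an illustration of this step, the sketch below assumes a radiometric thermal frame (one temperature value per pixel, in °C) and a tyre bounding box from the detector, crops the tyre region, and summarises its temperature. The function name, the (x_min, y_min, x_max, y_max) box convention, and the percentile statistic are assumptions for illustration only.

    import numpy as np

    def tyre_temperature(thermal_frame: np.ndarray, box: tuple) -> float:
        # Crop the tyre region identified by the bounding box.
        x_min, y_min, x_max, y_max = box
        tyre_region = thermal_frame[y_min:y_max, x_min:x_max]
        # Use a high percentile rather than the maximum so that isolated
        # hot-pixel noise does not dominate the temperature estimate.
        return float(np.percentile(tyre_region, 95))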
According to various non-limiting embodiments, the image data 106 may include depth data and the at least one accident indicator of the second vehicle may include a 3D pose estimation of the second vehicle 102 (e.g. position and orientation of the second vehicle 102).
As shown in FIG. 2A, the image data 106 (e.g. including a left image and a right image) captured by the at least one image sensor 104 may be processed by the controller 110 using a convolutional neural network (CNN). Weights may be shared between the processing of the left image and the right image in the CNN. A Spatial Pyramid Pooling (SPP) module may be applied to the CNN as an SPP layer on top of the last convolutional layer for object detection. The controller 110 may be configured to generate a 3D depth map using a custom Pyramid Stereo Matching network (PSMnet) based on the depth data. The PSMnet may be a pyramid stereo matching network consisting of two main modules: spatial pyramid pooling and a 3D CNN. The SPP module may take advantage of the capacity of global context information by aggregating context at different scales and locations to form a cost volume. The 3D CNN may learn to regularize the cost volume using stacked multiple hourglass networks in conjunction with intermediate supervision. The stacked hourglass networks may be used for pose estimation.
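PSMNet itself predicts a disparity map; a depth map then follows from the standard stereo triangulation relation depth = focal length × baseline / disparity. A short sketch follows; the camera parameters are placeholders, not values from the present disclosure.

    import numpy as np

    # Convert a PSMNet-style disparity map (in pixels) into per-pixel depth
    # (in metres) using the stereo rig's focal length and lens baseline.
    def disparity_to_depth(disparity_px: np.ndarray, focal_px: float, baseline_m: float) -> np.ndarray:
        disparity_px = np.maximum(disparity_px, 1e-6)  # guard against division by zero
        return focal_px * baseline_m / disparity_px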
The PSMNet may be trained using three stereo datasets: (a) Scene Flow: a large-scale synthetic dataset containing 35454 training and 4370 testing images; (b) KITTI 2015: a real-world dataset with street views from a driving car. It contains 200 training stereo image pairs with sparse ground-truth disparities obtained using LiDAR and another 200 testing image pairs without ground-truth disparities; and (c) KITTI 2012: a real-world dataset with street views from a driving car. It contains 194 training stereo image pairs with sparse ground-truth disparities obtained using LiDAR and 195 testing image pairs without ground-truth disparities.
The PSMNet may be trained using the following command on the Scene Flow dataset:

    python main.py --maxdisp 192 \
                   --model stackhourglass \
                   --datapath (your scene flow data folder) \
                   --epochs 10 \
                   --loadmodel (optional) \
                   --savemodel (path for saving model)

Further, the following command may be used to finetune the PSMNet on the KITTI 2015 dataset:

    python finetune.py --maxdisp 192 \
                       --model stackhourglass \
                       --datatype 2015 \
                       --datapath (KITTI 2015 training data folder) \
                       --epochs 300 \
                       --loadmodel (pretrained PSMNet) \
                       --savemodel (path for saving model)

Furthermore, the following command may be used to evaluate the trained PSMNet on the KITTI 2015 dataset:

    python submission.py --maxdisp 192 \
                         --model stackhourglass \
                         --KITTI 2015 \
                         --datapath (KITTI 2015 test data folder) \
                         --loadmodel (finetuned PSMNet)

The PSMNet architecture may be implemented using PyTorch. All models may be end-to-end trained with Adam (e.g. β1 = 0.9, β2 = 0.999). Color normalization may be performed on the entire dataset for data preprocessing. During training, images may be randomly cropped (e.g. size H = 256 and W = 512). The maximum disparity (D) may be set to 192. A constant learning rate of 0.001 may be set for 10 epochs during training. Fine-tuning on the KITTI training set may run for 300 epochs. The learning rate of this fine-tuning may begin at 0.001 for the first 200 epochs and drop to 0.0001 for the remaining 100 epochs. The batch size may be set to 12 for the training.
As shown in FIG. 2B, the controller 110 may be configured to extract features from 2D images captured by the at least one image sensor 104 and the 3D depth map using an Aggregate View Object Detection (AVOD) based architecture to provide feature maps at the first stage. At the first stage of AVOD feature extraction, data from the 2D images and the 3D depth map may be encoded by the respective encoders; the encoded data may then be processed by Region of Interest Align (RoI Align) (e.g. to crop the feature maps for region proposals) and data fusion using an anchor grid (e.g. 3D anchors). The object detection may be performed at the first stage. At the second stage, the controller 110 may be further configured to perform pose estimation and location prediction based on the information from the feature maps of the 2D images and the 3D point cloud obtained at the first stage. The pose estimation may be performed in a top-down approach by generating region proposals at the first stage and estimating poses in the defined regions.
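At a very high level, the fusion step can be pictured as cropping both feature maps to a common anchor region and averaging them. The sketch below is an illustrative simplification of AVOD-style fusion, not the exact architecture of the present disclosure.

    import numpy as np

    # Toy version of anchor-wise feature fusion: the image-view and depth-view
    # feature crops (already resized to the same shape by RoI Align) are fused
    # by an element-wise mean before the second-stage pose/location heads.
    def fuse_anchor_features(img_crop: np.ndarray, depth_crop: np.ndarray) -> np.ndarray:
        assert img_crop.shape == depth_crop.shape
        return 0.5 * (img_crop + depth_crop)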
FIG. 3 shows an embodiment of the controller 110 in the form of a device, such as an autonomous driving control unit (ADCU) 200, for use in a first vehicle 101 for detection of a second vehicle 102. The ADCU 200 comprises an input module 204 configured to obtain image data 106 associated with the second vehicle 102 and optionally a distance measure associated with a distance between the first vehicle 101 and the second vehicle 102; and an analysis module 206 arranged in data communication with the input module 204. The analysis module 206 may be configured to determine an accident indicator from the image data 106. In some embodiments, the analysis module 206 is configured to determine an accident indicator of the second vehicle 102, and to generate an output signal in response to the accident indicator of the second vehicle 102. The output signal may be generated by the controller 110 based on the determined accident indicator and the distance measure.
The ADCU 200 may comprise an output module 208 to send the output signal (e.g., at least one of the control signals and/or warning notifications mentioned above) in response to the accident probability associated with the second vehicle 102. The analysis module 206 may include an image processing module 210 and optionally a distance calculator module 212. The image processing module 210 may include an image processing algorithm for the determination of the accident indicator from the captured image data of the second vehicle 102. The outcome of the determination, which may be in the form of a binary one ("1") indicating a "yes", and a binary zero ("0") indicating a "no", may be sent to the output module 208.
In some embodiments, the image processing module 210 may include a machine learning algorithm trained to determine the accident indicator from the image data 106 collected as described herein. In some embodiments, the image processing module 210 may include a machine vision module.
The distance calculator module 212 may be configured to receive detection signals (e.g. radio wave signals) from the detection sensor(s) 108 and compute the distance measure between the first vehicle 101 and the second vehicle 102 based on the signals received from the radar sensor(s) 108. The computed distance may be a lateral distance between the first vehicle 101 and the second vehicle 102.
FIG. 4 shows another embodiment of a method 400 for predicting an accident probability of a second vehicle 102 in relation with a first vehicle 101. The method 400 may be implemented as executable code stored in a computer-readable medium, which may be stored in a memory of the controller 110, 200. The executable code comprises instructions for predicting a potential accident associated with a second vehicle 102 with respect to a first vehicle 101. The method 400 may comprise the steps of:
Step 402: obtaining image data 106 associated with the second vehicle 102;
Step 404: determining at least one accident indicator of the second vehicle 102 from the image data 106, the at least one accident indicator indicative of the accident probability of the second vehicle 102 in relation with the first vehicle 101; and
Step 406: generating, based on the at least one accident indicator when the at least one accident indicator exceeds an acceptable range or a threshold, an output signal for the first vehicle 101 to respond to the accident probability associated with the second vehicle 102. That is, the controller 110 may be configured to determine whether the at least one accident indicator exceeds an acceptable range, and in response to the determination that the at least one accident indicator exceeds the acceptable range, the controller 110 is configured to generate the output signal.
FIG. 5 shows another embodiment of a method 500 for predicting an accident probability of a second vehicle 102 in relation to the first vehicle 101, based on thermal images of the second vehicle 102 captured by the first vehicle 101. The method 500 is assumed to be implemented as executable code stored in a computer-readable medium, which may be stored in a memory of the controller 110, 200. The executable code comprises instructions for predicting an accident probability associated with a second vehicle 102 with respect to a first vehicle 101, and the method 500 may comprise the steps of: Step 502: The controller 110 or 200 installed in the first vehicle 101 is activated and in a monitoring state. The image sensor(s) 104 of the first vehicle 101, including a left lens 502a and a right lens 502b, capture thermal image data (which may include video files) of the second vehicle 102.
Step 504: The controller 110 or 200 may be configured to generate a 3D depth map based on the thermal image data. The range of depth estimation may reach up to 100 m.
Step 506: The controller 110 or 200 may be configured to generate a 3D pose estimation of the second vehicle 102 using a pose estimation model, as described with reference to FIG. 2B, which may also be used for object detection. The pose estimation model may extract the features from the 2D images captured by the at least one image sensor 104 as described herein.
Step 508: The controller 110 or 200 may be configured to identify the tyre region with a bounding box in the image data 106.
Step 510: The controller 110 or 200 may be configured to determine the tyre temperature based on the segment of the image data corresponding to the tyre region.
Step 512: The controller 110 or 200 may be configured to generate a collision warning signal when a depth estimate from the 3D depth map exceeds a distance threshold (i.e. is less than the distance threshold), when the 3D pose estimate coordinates exceed a predefined range (i.e. fall out of the predefined range), or when a tyre temperature exceeds a predefined range (i.e. falls out of the predefined range). That is, the controller 110 may be configured to determine whether the at least one accident indicator exceeds an acceptable range, and in response to the determination that the at least one accident indicator exceeds the acceptable range, the controller 110 is configured to generate the collision warning signal.
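Pulling the three indicators together, a compact Python sketch of the Step 512 decision follows; the threshold values and the function name are illustrative assumptions reusing the example limits given earlier.

    # Raise a collision warning if any accident indicator leaves its safe range:
    # the second vehicle is too close, its pose is out of bounds, or its tyre
    # temperature is outside the optimal operating range.
    def collision_warning(depth_m: float, pose_in_range: bool, tyre_temp_c: float,
                          min_depth_m: float = 1.0,
                          temp_range_c: tuple = (60.0, 150.0)) -> bool:
        too_close = depth_m < min_depth_m
        temp_unsafe = not (temp_range_c[0] <= tyre_temp_c <= temp_range_c[1])
        return too_close or (not pose_in_range) or temp_unsafe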
Step 514: The controller 110 or 200 may be configured to output the collision warning signal for the first vehicle 101 to respond to the accident probability of the second vehicle 102 in relation with the first vehicle 101.
While the disclosure has been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims.
REFERENCE SIGNS
100: System
101: First vehicle
102: Second vehicle
104: Image sensor(s)
106: Image data
108: Detection sensor(s)
110: Controller
116: Controller output signal
118: Actuator unit
200: Autonomous driving control unit (ADCU)
204: Input module
206: Analysis module
208: Output module
210: Image processing module
212: Distance calculator
400: Method
402-406: Method steps
500: Method
502-514: Method steps

Claims (15)

  1. A system (100) for use with a first vehicle (101) to predict an accident probability of a second vehicle (102) in relation to the first vehicle (101), the system (100) comprising: at least one image sensor (104) configured to obtain image data (106) associated with the second vehicle (102); a controller (110) configured to determine at least one accident indicator of the second vehicle (102) from the image data (106), the at least one accident indicator indicative of the accident probability of the second vehicle (102) in relation to the first vehicle (101), wherein the controller (110) is further configured to generate an output signal based on the at least one accident indicator, when the at least one accident indicator exceeds an acceptable range or a threshold, for the first vehicle (101) to respond to the accident probability of the second vehicle (102) in relation to the first vehicle (101), and wherein the image data (106) comprises thermal data and wherein the at least one accident indicator of the second vehicle (102) comprises a temperature indicator of at least one tyre of the second vehicle (102).
  2. The system (100) according to claim 1, wherein the at least one image sensor (104) comprises a stereo camera having two or more thermal image sensors.
  3. The system (100) according to claim 1, wherein the controller (110) is configured to obtain the temperature indicator from a segment of the image data (106) in a bounding box corresponding to the at least one tyre of the second vehicle (102).
  4. The system (100) according to any one of claims 1 to 3, wherein the image data (106) comprises depth data and wherein the at least one accident indicator of the second vehicle (102) comprises a pose estimation of the second vehicle (102).
  5. The system (100) according to claim 4, wherein the controller (110) is configured to perform data fusion of 2D image data and 3D point cloud obtained from the image data (106).
  6. The system (100) according to claim 4, wherein the controller (110) is configured to generate a depth map using a Pyramid Stereo Matching network (PSMnet) based on the depth data.
  7. The system (100) according to claim 6, wherein the controller (110) is further configured to extract features from images captured by the at least one image sensor (104) and the depth map using Aggregate View Object Detection (AVOD) based architecture.
  8. The system (100) according to claim 4, wherein the at least one accident indicator of the second vehicle (102) comprises a depth estimate of the second vehicle (102) obtained based on the depth map and wherein the controller (110) is further configured to generate the output signal based on the depth estimate, when the depth estimate is less than a distance threshold.
  9. The system (100) according to any one of claims 1 to 8, wherein the output signal (116) comprises at least one of a deceleration control signal, an emergency brake control signal, a directional change control signal and/or a maintain direction control signal.
  10. A first vehicle (101), comprising the system (100) according to any one of claims 1 to 9.
  11. The first vehicle (101) according to claim 10, wherein the at least one image sensor (104) comprises a stereo camera having two or more thermal image sensors positioned at a rear portion of the first vehicle (101).
  12. A computer-implemented method (400) for predicting an accident probability of a second vehicle (102) in relation with a first vehicle (101), the method (400) comprising: obtaining (402) image data (106) associated with the second vehicle (102); determining (404) at least one accident indicator of the second vehicle (102) from the image data (106), the at least one accident indicator indicative of the accident probability of the second vehicle (102) in relation with the first vehicle (101); generating (406), based on the at least one accident indicator, when the at least one accident indicator exceeds an acceptable range or a threshold, an output signal for the first vehicle (101) to respond to the accident probability of the second vehicle (102) in relation with the first vehicle (101), wherein the image data (106) comprises thermal data and wherein the at least one accident indicator of the second vehicle (102) comprises a temperature indicator of at least one tyre of the second vehicle (102).
  13. The method (400) of claim 12, wherein the at least one accident indicator further comprises a pose estimation of the second vehicle (102), and/or a depth estimate of the second vehicle (102).
  14. The method (400) of claim 12 or claim 13, wherein the image data (106) comprises depth data, the method (400) further comprising: generating a depth map using a Pyramid Stereo Matching network (PSMnet) based on the depth data; and extracting features from images captured by the at least one image sensor (104) and the depth map using Aggregate View Object Detection (AVOD) based architecture.
  15. A non-transitory computer-readable medium storing computer executable code comprising instructions that cause a processor to carry out the method (400) according to any one of claims 12 to 14.
GB2314010.6A 2023-09-14 2023-09-14 System, device, and method for predicting an accident probability associated with a vehicle Pending GB2633588A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2314010.6A GB2633588A (en) 2023-09-14 2023-09-14 System, device, and method for predicting an accident probability associated with a vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2314010.6A GB2633588A (en) 2023-09-14 2023-09-14 System, device, and method for predicting an accident probability associated with a vehicle

Publications (2)

Publication Number Publication Date
GB202314010D0 GB202314010D0 (en) 2023-11-01
GB2633588A true GB2633588A (en) 2025-03-19

Family

ID=88507201

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2314010.6A Pending GB2633588A (en) 2023-09-14 2023-09-14 System, device, and method for predicting an accident probability associated with a vehicle

Country Status (1)

Country Link
GB (1) GB2633588A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007257133A (en) * 2006-03-22 2007-10-04 Nissan Motor Co Ltd Object detection system
US20090189752A1 (en) * 2008-01-25 2009-07-30 Taylor Ronald M Thermal radiation detector
US20100253540A1 (en) * 2009-04-02 2010-10-07 Gm Global Technology Operations, Inc. Enhanced road vision on full windshield head-up display
US20180134281A1 (en) * 2016-11-16 2018-05-17 NextEv USA, Inc. System for controlling a vehicle based on thermal profile tracking
US20180236828A1 (en) * 2017-02-21 2018-08-23 Ford Global Technologies, Llc Tire-diagnosis system
US20210405185A1 (en) * 2020-06-30 2021-12-30 Tusimple, Inc. System and method providing truck-mounted sensors to detect trailer following vehicles and trailer conditions

Also Published As

Publication number Publication date
GB202314010D0 (en) 2023-11-01
