GB2638006A - Method and apparatus for classifying terrain
- Publication number
- GB2638006A (Application GB2401887.1)
- Authority
- GB
- United Kingdom
- Prior art keywords
- terrain
- image
- section
- neural network
- class
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/61—Scene description
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
Abstract
Method for classifying a section of terrain by: capturing terrain images via an optical imaging sensor of a vehicle; inputting the images to an artificial neural network configured to determine image classification probabilities, the classification probabilities each indicating a probability of the terrain in the image belonging to a respective terrain class; determining and outputting a terrain class based on the classification probabilities. Also disclosed: training a neural network to determine terrain image classification likelihoods using first and second image datasets labelled as belonging to first and second terrain classes respectively. Images may be segmented, and the segments labelled, before input to the neural network. The output terrain class signal may be used to select a vehicle subsystem control mode. Terrain classes may comprise a dirt road or track, grass, mud, ruts, paved or metalled road, rock, sand, snow. Image classification may be performed in real time, updating dynamically as the landscape changes. The image sensor may comprise a stereo camera to determine three dimensional depth information or perspective.
Description
METHOD AND APPARATUS FOR CLASSIFYING TERRAIN
TECHNICAL FIELD
The present disclosure relates to a method and apparatus for classifying terrain. Aspects of the invention relate to a system for classifying a terrain, a vehicle, a method of classifying terrain, a computer-implemented training method, an artificial neural network, and computer readable instructions.
BACKGROUND
It is known to provide a control system in a vehicle to select a subsystem control mode in dependence on a class (or type) of terrain being traversed by the vehicle. The control system may, for example, receive state indicators providing an indication of an operating state of the vehicle. The state indicators are typically received from sensors provided on-board the vehicle to monitor operating parameter(s) of the vehicle. By analysing the state indicators, the control system can identify a subsystem control mode which is suitable for traversing the prevailing terrain. A potential limitation of this approach is that the control system is reactive and can only use vehicle inputs to infer the appropriate subsystem control mode.
It is an aim of the present invention to address one or more of the disadvantages associated with the prior art.
SUMMARY OF THE INVENTION
Aspects and embodiments of the invention provide a system for classifying a section of terrain, a vehicle, a method for classifying a section of terrain, a computer-implemented method of training an artificial neural network to classify a section of terrain, an artificial neural network, and computer readable instructions as claimed in the appended claims.
According to an aspect of the present invention there is provided a system for classifying a section of terrain, the system comprising one or more processors collectively configured to: receive image data representing an image which comprises a section of terrain, the image data being captured by at least one optical imaging sensor provided on the vehicle; input at least a portion of the image data into an artificial neural network, the artificial neural network being configured to determine a plurality of image classification probabilities, the plurality of image classification probabilities each indicating a probability that the section of terrain is a respective one of a plurality of terrain classes; determine a terrain class of the section of terrain in dependence on the plurality of image classification probabilities; and output a terrain class signal indicating the determined terrain class of the section of terrain.
The system is configured to classify the section of terrain as being one of the plurality of terrain classes. The classification of the section of terrain is performed in dependence on the analysis of the image data by the artificial neural network. The terrain classes are predefined, for example comprising one or more of the following: a dirt road, grass, mud and ruts, (paved/metalled) road, rock, sand and snow. The image data is typically captured by one or more imaging sensors provided onboard the vehicle. The image data is processed by the artificial neural network to generate the plurality of image classification probabilities indicating a likelihood that the section of terrain is each of the plurality of terrain classes. The processing of the image data may be performed at least substantially in real time. The system may classify the section of terrain dynamically, for example to reflect changes in the terrain class.
The image classification probabilities indicate the probability that the section of terrain comprises or consists of a respective one of the plurality of terrain classes. The determination of the terrain class of the section of terrain may be performed in dependence on the plurality of image classification probabilities, and may comprise identifying the terrain class having the highest image classification probability. The terrain class signal indicates the determined terrain class of the section of terrain. At least in certain embodiments, the analysis of the image data by the artificial neural network enables the section of terrain to be classified pre-emptively. The section of terrain can be classified before the vehicle begins a traversal of the section of terrain displayed in the image. The section of terrain may, for example, be classified while the vehicle is stationary. In other words, the terrain class may be predicted for a section of terrain disposed in front of or ahead of the vehicle (distinct from the section of terrain under the vehicle). At least in certain embodiments, the pre-emptive determination of the terrain class may enable one or more vehicle subsystems to be pre-configured for traversal of the section of terrain. At least in certain embodiments, a subsystem control mode may be selected in dependence on the determined terrain class. The subsystem control mode may configure one or more vehicle subsystems for traversing the section of terrain in front of or ahead of the vehicle.
The image may be captured by one or more optical imaging sensors. The one or more optical imaging sensors may be configured to detect electromagnetic radiation comprising or consisting of the portion of the electromagnetic spectrum that is visible to the human eye. The image captured by the optical imaging sensor may be referred to as a visible (optical) image. The one or more optical imaging sensors may be forward facing. The one or more optical imaging sensors may be configured to capture an image of a scene in front of the vehicle. The section of terrain may form at least a part of the scene. The or each optical imaging sensor may comprise a camera. The camera may be in the form of a mono-camera or a stereo-camera. A stereo-camera may facilitate determination of three-dimensional information, such as depth information or perspective of features present in the image.
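As a brief, hedged illustration of how a stereo-camera yields depth (the numbers below are assumptions for the sketch, not values from this document): for a rectified stereo pair, the depth Z of a feature follows from the focal length f (in pixels), the stereo baseline B (in metres) and the pixel disparity d as Z = f·B/d.

```python
# Illustrative only: depth from stereo disparity for a rectified pair.
# Focal length, baseline and disparity values below are assumptions.
def stereo_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (metres) of a feature: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# e.g. f = 800 px, baseline B = 0.12 m, disparity d = 16 px
print(stereo_depth(800.0, 0.12, 16.0))  # 6.0 metres
```

Nearer features exhibit larger disparity, which is why a stereo-camera can supply a range estimate for the section of terrain ahead of the vehicle.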
The system may also determine a terrain classification probability in dependence on a signal(s) received from one or more other (non-imaging) sensors provided onboard the vehicle. The one or more sensors may, for example, provide an indication of an operating state of the vehicle. The signal(s) indicating the operating state of the vehicle may be supplied to the artificial neural network to classify the section of terrain.
At least in certain embodiments, the control system may enable the image classification probabilities to be mapped onto a different number of vehicle subsystem control modes. For example, the control system may receive more image classification probabilities than there are vehicle subsystem control modes.
The control system comprises one or more controllers collectively comprising at least one electronic processor having an electrical input for receiving an input signal; and at least one memory device electrically coupled to the at least one electronic processor and having instructions stored therein; and wherein the at least one electronic processor is configured to access the at least one memory device and execute the instructions thereon so as to: receive the image data representing the image which comprises a section of terrain, the image data being captured by at least one optical imaging sensor provided on the vehicle; input at least a portion of the image data into an artificial neural network, the artificial neural network being configured to determine a plurality of image classification probabilities, the plurality of image classification probabilities each indicating a probability that the section of terrain is a respective one of a plurality of terrain classes; determine the terrain class of the section of terrain in dependence on the plurality of image classification probabilities; and output the terrain class signal indicating the determined terrain class of the section of terrain.
The determination of the terrain class of the section of terrain may comprise determining which of the plurality of terrain classes has the highest image classification probability. The highest image classification probability represents the terrain classification determined to be most likely to be represented in the image.
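The highest-probability selection rule described above can be sketched as follows. The class names mirror the terrain classes listed in this document, but the probability values are invented for illustration:

```python
# Illustrative sketch: selecting the terrain class with the highest
# image classification probability. Probability values are invented.
TERRAIN_CLASSES = ["dirt road", "grass", "mud and ruts", "road",
                   "rock", "sand", "snow"]

def classify_terrain(image_classification_probabilities):
    """Return the terrain class whose classification probability is highest."""
    best_index = max(range(len(image_classification_probabilities)),
                     key=lambda i: image_classification_probabilities[i])
    return TERRAIN_CLASSES[best_index]

# Example: probabilities favouring "grass"
probs = [0.05, 0.62, 0.10, 0.08, 0.05, 0.05, 0.05]
print(classify_terrain(probs))  # grass
```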
The one or more processors collectively may be configured to process the image data using an image segmentation model. The image segmentation model may be configured to segment the image into a plurality of image segments. At least one of the plurality of image segments corresponds to the section of terrain to be classified in the image. The image segmentation model may process the image data to identify the or each image segment corresponding to the section of terrain to be classified. The system may be configured to process the or each segment representing the section of terrain to be classified. The system may discard or discount any segments of the image which do not represent the section of terrain to be classified. This may reduce the processing required by the artificial neural network to classify the section of terrain, thereby reducing the computational overhead. The image segmentation model may label the segments, for example to indicate whether the segments represent terrain or non-terrain features. The image segmentation model may, for example, segment the image into a first segment representing the section of terrain to be classified; and a second segment representing a non-terrain feature, such as a section of the sky. The section of terrain may be classified in dependence on the first segment in this example. The classification of the section of terrain may be performed without processing or analysing the second segment.
The system may be configured such that the image data input into the artificial neural network comprises or consists of the at least one image segment corresponding to the section of terrain to be classified. The artificial neural network may determine the image classification probabilities in dependence on analysis of the at least one image segment corresponding to the section of terrain to be classified. The image classification probability may be determined exclusively in dependence on the analysis of the at least one image segment corresponding to the section of terrain to be classified. This may reduce the processing required by the artificial neural network to classify the section of terrain, thereby reducing the computational overhead.
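The discard-non-terrain-segments step can be sketched as below. The segment labels ("terrain", "sky") and the data layout are illustrative assumptions, not this document's implementation:

```python
# Hypothetical sketch of discarding non-terrain segments before
# classification. Segment labels and pixel data are assumptions.
def select_terrain_segments(segments):
    """Keep only segments labelled as terrain; discard the rest (e.g. sky)
    so the neural network has less data to process."""
    return [seg for seg in segments if seg["label"] == "terrain"]

segments = [
    {"label": "sky",     "pixels": [[200, 210], [205, 215]]},
    {"label": "terrain", "pixels": [[90, 85], [88, 92]]},
]
terrain_only = select_terrain_segments(segments)
print(len(terrain_only))  # 1: only the terrain segment is retained
```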
According to a further aspect of the present invention there is provided a system for classifying a section of terrain, the system comprising one or more processors collectively configured to: receive image data representing an image which comprises a section of terrain, the image data being captured by at least one optical imaging sensor provided on the vehicle; and input at least a portion of the image data into an artificial neural network, the artificial neural network being configured to determine a plurality of image classification probabilities, the plurality of image classification probabilities each indicating a probability that the section of terrain is a respective one of a plurality of terrain classes. The system may determine the terrain class of the section of terrain in dependence on the plurality of image classification probabilities.
According to a further aspect of the present invention there is provided a control system comprising the system as described herein and a subsystem control mode selector. The subsystem control mode selector may be configured to select a subsystem control mode in dependence on the terrain class signal.
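The selection of a subsystem control mode from the terrain class signal can be sketched as a simple mapping. The mode names below are assumptions for the sketch (this document states only that a mode is selected in dependence on the terrain class signal); note that several terrain classes may share one mode, since there may be fewer control modes than classes:

```python
# Illustrative mapping from determined terrain class to a vehicle
# subsystem control mode. Mode names are hypothetical.
SUBSYSTEM_MODE_FOR_CLASS = {
    "dirt road":    "gravel mode",
    "grass":        "grass/gravel/snow mode",
    "mud and ruts": "mud and ruts mode",
    "road":         "on-road mode",
    "rock":         "rock crawl mode",
    "sand":         "sand mode",
    "snow":         "grass/gravel/snow mode",  # shares a mode with grass
}

def select_subsystem_mode(terrain_class_signal):
    """Select the subsystem control mode for the determined terrain class."""
    return SUBSYSTEM_MODE_FOR_CLASS[terrain_class_signal]

print(select_subsystem_mode("snow"))  # grass/gravel/snow mode
```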
According to a further aspect of the present invention there is provided a vehicle comprising the control system described herein or the system described herein.
According to a further aspect of the present invention there is provided a method for classifying a section of terrain, the method comprising: receiving image data representing an image comprising a section of terrain, the image data being captured by at least one optical imaging sensor provided on a vehicle; inputting at least a portion of the image data into an artificial neural network configured to determine a plurality of image classification probabilities, the plurality of image classification probabilities each indicating a probability that the section of terrain is a respective one of a plurality of terrain classes; determining a terrain class of the section of terrain in dependence on the plurality of image classification probabilities; and outputting a terrain class signal indicating the terrain class of the section of terrain.
The image data is analysed by the artificial neural network to classify the section of terrain as being one of the plurality of terrain classes. As described herein, the terrain classes may be predefined, for example comprising one or more of the following: a dirt road, grass, mud and ruts, (paved/metalled) road, rock, sand and snow. The image data may be captured by one or more imaging sensors provided onboard the vehicle.
The method comprises determining the plurality of image classification probabilities which indicate the likelihood that the section of terrain is each of the plurality of terrain classes. The method may comprise classifying the section of terrain dynamically, for example to reflect changes in the terrain class.
The method may comprise segmenting the image into a plurality of image segments, wherein at least one of the plurality of image segments corresponds to the section of terrain to be classified in the image. The image may be segmented by a segmentation model, for example.
The image data input into the artificial neural network may comprise or consist of the at least one image segment corresponding to the section of terrain to be classified. The image segments may be labelled. The at least one image segment determined to correspond to the section of terrain to be classified may, for example, be labelled as corresponding to the section of terrain. The artificial neural network may determine the or each image classification probability exclusively in dependence on the at least one image segment corresponding to the section of terrain to be classified. This may reduce the processing required by the artificial neural network to classify the section of terrain, thereby reducing the computational overhead.
According to a further aspect of the present invention there is provided a computer-implemented training method for training an artificial neural network to classify a section of terrain represented in an image; the method comprising: receiving a plurality of training data sets, the training data sets comprising: first image data representing a plurality of first images, each of the first images representing a section of terrain of a first terrain class, the first image data being labelled as being the first terrain class; and second image data representing a plurality of second images, each of the second images representing a section of terrain of a second terrain class, the second image data being labelled as being the second terrain class; and training the artificial neural network to determine a first image classification probability indicating the probability that a section of terrain represented in an image is the first terrain class, and a second image classification probability indicating the probability that the section of terrain represented in the image is the second terrain class.
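A minimal supervised-training sketch in the spirit of this aspect: two labelled datasets (first and second terrain class) are used to fit a classifier that outputs one probability per class. A tiny softmax regression on toy feature vectors stands in for the artificial neural network; the features, learning rate and epoch count are all assumptions, not the trained network described here.

```python
# Toy supervised training: softmax regression over two labelled classes.
# Stand-in for the artificial neural network; all values illustrative.
import math
import random

random.seed(0)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Toy "image features": class 0 clusters near (1, 0), class 1 near (0, 1).
data = [([1.0 + random.gauss(0, 0.1), random.gauss(0, 0.1)], 0) for _ in range(20)]
data += [([random.gauss(0, 0.1), 1.0 + random.gauss(0, 0.1)], 1) for _ in range(20)]

# One weight vector (plus bias) per terrain class.
W = [[0.0, 0.0], [0.0, 0.0]]
b = [0.0, 0.0]
lr = 0.5
for _ in range(200):  # stochastic gradient descent on cross-entropy loss
    for x, y in data:
        z = [W[k][0] * x[0] + W[k][1] * x[1] + b[k] for k in range(2)]
        p = softmax(z)
        for k in range(2):
            grad = p[k] - (1.0 if k == y else 0.0)
            W[k][0] -= lr * grad * x[0]
            W[k][1] -= lr * grad * x[1]
            b[k] -= lr * grad

def predict_probs(x):
    """Return one classification probability per class for feature vector x."""
    z = [W[k][0] * x[0] + W[k][1] * x[1] + b[k] for k in range(2)]
    return softmax(z)

p = predict_probs([1.0, 0.0])
print(p[0] > p[1])  # True: classified as the first terrain class
```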
According to a further aspect of the present invention there is provided an artificial neural network trained using the method described herein, wherein the artificial neural network is configured to: receive image data representing an image comprising a section of terrain; process the image data and determine a plurality of image classification probabilities, the plurality of image classification probabilities each indicating a probability that the section of terrain is a respective one of a plurality of terrain classes; and output the plurality of image classification probabilities. The artificial neural network may be executed on a computational device comprising one or more electronic processors. According to a further aspect of the present invention there is provided computer readable instructions which, when executed by a computer, are arranged to implement the artificial neural network described herein.
According to a further aspect of the present invention there is provided a system for classifying a section of terrain, the system comprising one or more processors collectively configured to implement the artificial neural network described herein. The system may be configured to determine a terrain class of the section of terrain in dependence on the plurality of image classification probabilities.
According to a further aspect of the present invention there is provided computer readable instructions which, when executed by a computer, are arranged to perform a method described herein.
Within the scope of this application it is expressly intended that the various aspects, embodiments, examples and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. That is, all embodiments and/or features of any embodiment can be combined in any way and/or combination, unless such features are incompatible. The applicant reserves the right to change any originally filed claim or file any new claim accordingly, including the right to amend any originally filed claim to depend from and/or incorporate any feature of any other claim although not originally claimed in that manner.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 shows a schematic representation of a vehicle incorporating a terrain classification system in accordance with an embodiment of the present invention;
Figure 2 shows a schematic representation of a terrain classification system configured to process image data to generate a plurality of image classification probabilities;
Figure 3 shows a schematic representation of a control system for controlling operation of a plurality of vehicle subsystems of the vehicle shown in Figure 1;
Figure 4 is a block diagram representing a method of classifying terrain in accordance with an embodiment of the present invention;
Figure 5 shows a first image comprising a section of terrain which is analysed by the terrain classification system to generate the displayed terrain classification probabilities;
Figure 6 shows a second image comprising a section of terrain which is analysed by the terrain classification system to generate the displayed terrain classification probabilities;
Figure 7 is a block diagram representing a computer-implemented method of training a machine learning algorithm for the terrain classification system according to an embodiment of the present invention; and
Figure 8 illustrates pre-processing of the first image shown in Figure 5 to identify one or more segments which represent the section of terrain to be classified by the terrain classification system.
DETAILED DESCRIPTION
A system 1 and method 100 for classifying a section of terrain (denoted generally by the reference numeral ORT) in accordance with an embodiment of the present invention is described herein with reference to the accompanying Figures.
The system 1 is configured to be used in a vehicle 5. The vehicle 5 is a wheeled vehicle, such as an automobile, a utility vehicle or a sports utility vehicle. As shown in Figure 1, the vehicle 5 comprises four (4) wheels W1-W4. The vehicle 5 in the present embodiment is suitable for use both on-road and off-road. Each of the wheels W1-W4 may be selectively driven to propel the vehicle 5. The vehicle 5 comprises at least one electric drive unit 7 configured to drive the wheels W1-W4. The electric drive unit 7 is supplied with electrical energy stored in a traction battery 9. One or more inverters 11 are provided for converting the direct current (DC) from the traction battery 9 into alternating current (AC) which is supplied to the at least one electric drive unit 7. The vehicle 5 is a battery electric vehicle (BEV) in the present embodiment. It will be understood that the system 1 and the method 100 described herein are applicable to other types of vehicle 5, such as a hybrid electric vehicle (HEV), a plug-in hybrid electric vehicle (PHEV) or an internal combustion engine (ICE) vehicle.
The system 1 is configured to classify the terrain ORT on which the vehicle 5 is operating as one of a plurality of terrain classes TYP(n) (the suffix n is used herein to differentiate between the different terrain classes). As described herein, the terrain classes TYP(n) are predefined and different from each other. The terrain classes TYP(n) represent a range of different types of terrain that the vehicle 5 may encounter during normal operation. The terrain classes TYP(n) in the present embodiment comprise the following: a dirt road TYP(1), grass TYP(2), mud and ruts TYP(3), road TYP(4), rock TYP(5), sand TYP(6) and snow TYP(7). It will be understood that different terrain classes TYP(n) may be defined. In the present embodiment, seven (7) different terrain classes TYP(n) are defined, but it will be understood that less than or more than seven (7) terrain classes TYP(n) may be defined.
The system 1 is configured to monitor a region of terrain proximal to the vehicle 5, typically in front of the vehicle 5. The vehicle 5 comprises one or more imaging sensors 21 configured to capture image data IMD(n) representing an image IMG(n). The or each imaging sensor 21 comprises or consists of an optical imaging sensor 21. The or each optical imaging sensor 21 is configured to detect light in the portion of the electromagnetic spectrum that is visible to the human eye. The image IMG(n) captured by the optical imaging sensor may be referred to as a visible (optical) image IMG(n). The vehicle 5 in the present embodiment comprises one or more first imaging sensors 21 configured to capture first image data IMD(1) representing a first image IMG(1). The first imaging sensor 21 has a direct line-of-sight to the terrain ORT, preferably reducing or avoiding reflections (for example in a side mirror or rear-view mirror). The system 1 described herein may have a dedicated first imaging sensor 21 to capture the first image data IMD(1). It will be understood that the first imaging sensor 21 may also be shared with other vehicle systems. The first imaging sensor 21 in the present embodiment is mounted in an elevated position, for example at the top of a front windshield of the vehicle 5. The first imaging sensor 21 may be mounted in other locations on the vehicle 5. The first imaging sensor 21 may have different orientations/directions to the arrangements illustrated herein. The first image IMG(1) is a dynamic image which changes with respect to time. The vehicle 5 is described herein as comprising one said first imaging sensor 21, although it will be appreciated that this is merely illustrative. The first imaging sensor 21 in the present embodiment is an optical camera configured to detect visible light.
The first image IMG(1) is a visible (optical) image IMG(1).
As illustrated in Figure 1, the first imaging sensor 21 has a first field of view FOV1. The first field of view FOV1 extends in front of the vehicle 5 such that the first image IMG(1) represents a scene to a front of the vehicle 5. The first imaging sensor 21 is a mono-camera. In a variant, the imaging sensor 21 may comprise a stereo camera for capturing stereo images, for example to facilitate determination of a distance (range) from the vehicle 5 to features represented in the first image IMG(1).
The system 1 comprises a terrain classification system 29 for classifying the terrain ORT. The terrain classification system 29 is configured to process at least a portion of the first image data IMD(1) received from the first imaging sensor 21 to classify the terrain ORT. The first image data IMD(1) represents the first image IMG(1) captured by the first imaging sensor 21. The terrain classification system 29 may classify the terrain ORT by analysing the first image data IMD(1) representing at least substantially all of the first image IMG(1). Alternatively, the terrain classification system 29 may classify the terrain ORT by processing a sub-set of the first image data IMD(1), for example corresponding to a segment of the first image IMG(1).
The terrain classification system 29 in the present embodiment implements an artificial neural network ANN to classify the terrain class TYP(n). At least in certain embodiments, the artificial neural network is trained using a supervised learning technique. In the present embodiment, the artificial neural network is a convolutional neural network (CNN). The convolutional neural network (CNN) may be trained using image data representing off-road terrain as a primary source. Alternatively, transfer learning may be used to refine a deep convolutional neural network (DCNN) for use in the system 1 to classify the terrain ORT. The deep convolutional neural network (DCNN) may be pre-trained on general image data which is not specific to terrain classification. A classifier is then applied to the deep convolutional neural network (DCNN). The classifier is a convolutional neural network (CNN) trained using image data representing off-road terrain as a primary source. It has been determined that the combination of the deep convolutional neural network (DCNN) and a classifier is particularly effective. The artificial neural network ANN in the present embodiment comprises a deep convolutional neural network (DCNN) and a dedicated classifier. Other techniques may be used to train the artificial neural network. For example, the artificial neural network may be trained using unsupervised learning techniques, such as competitive learning.
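The transfer-learning arrangement described above (a pre-trained deep network whose weights are left untouched, followed by a dedicated classifier trained on terrain imagery) can be sketched in simplified form. The sketch below is illustrative only: a fixed random projection stands in for the pre-trained DCNN backbone, and a softmax linear layer stands in for the dedicated classifier; all names and shapes are assumptions, not part of the disclosed system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the pre-trained DCNN backbone: a frozen mapping from raw
# image pixels to a feature vector. In the arrangement described above,
# this would be a network pre-trained on general image data, with its
# weights left unchanged during terrain-specific training.
BACKBONE_W = rng.normal(size=(64 * 64, 128))

def extract_features(image):
    """Frozen feature extractor: flatten the image and project it."""
    return np.tanh(image.reshape(-1) @ BACKBONE_W)

# The dedicated classifier trained on off-road terrain imagery: here a
# single linear layer scoring each of seven terrain classes.
N_CLASSES = 7

def train_classifier(features, labels, epochs=100, lr=0.1):
    """Fit only the classifier weights; the backbone stays frozen."""
    W = np.zeros((features.shape[1], N_CLASSES))
    for _ in range(epochs):
        logits = features @ W
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        # Gradient of the cross-entropy loss with respect to the logits.
        probs[np.arange(len(labels)), labels] -= 1.0
        W -= lr * features.T @ probs / len(labels)
    return W

# Toy training run on random "images" with random terrain-class labels.
images = rng.normal(size=(32, 64, 64))
labels = rng.integers(0, N_CLASSES, size=32)
feats = np.array([extract_features(im) for im in images])
W = train_classifier(feats, labels)
predicted = np.argmax(feats @ W, axis=1)
```

Only the classifier weights W are updated, which mirrors the determination noted above that combining a pre-trained DCNN with a dedicated classifier is particularly effective: the expensive general-purpose feature learning is reused, and only the terrain-specific head is fitted.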
The artificial neural network ANN is configured to classify the terrain ORT represented in the first image IMG(1) as being one of the plurality of predefined terrain classes TYP(n). The artificial neural network ANN calculates an image classification probability imp(n) for each of the plurality of terrain classes TYP(n). The deep convolutional neural network (DCNN) processes the first image IMG(1) to identify features therein. The deep convolutional neural network (DCNN) calculates the image classification probability imp(n) for each of the plurality of terrain classes TYP(n) in dependence on the identified features. Each image classification probability imp(n) is calculated in the range negative one (-1) to positive one (+1) in the present embodiment. Other ranges may be used to define the image classification probabilities imp(n). Each of the plurality of image classification probabilities imp(n) indicates the likelihood that the terrain ORT comprises (or is predominantly composed of) a respective one of the plurality of predefined terrain classes TYP(n). The terrain ORT is classified as the terrain class TYP(n) having the highest image classification probability imp(n).
In the present embodiment, the artificial neural network ANN calculates, for each of the predefined terrain classes TYP(n), the image classification probability imp(n) that the terrain ORT represented in the first image IMG(1) is of that class. The artificial neural network ANN calculates each of the following image classification probabilities imp(n) in respect of the terrain ORT shown in the first image IMG(1): 1. A first image classification probability imp(1) indicates a probability that the terrain ORT comprises or consists of a dirt road TYP(1).
2. A second image classification probability imp(2) indicates a probability that the terrain ORT comprises or consists of grass TYP(2).
3. A third image classification probability imp(3) indicates a probability that the terrain ORT comprises or consists of mud and ruts TYP(3).
4. A fourth image classification probability imp(4) indicates a probability that the terrain ORT comprises or consists of road TYP(4), for example, having a paved or metalled road surface.
5. A fifth image classification probability imp(5) indicates a probability that the terrain ORT comprises or consists of rock TYP(5).
6. A sixth image classification probability imp(6) indicates a probability that the terrain ORT comprises or consists of sand TYP(6).
7. A seventh image classification probability imp(7) indicates a probability that the terrain ORT comprises or consists of snow TYP(7).
The terrain classification system 29 classifies the terrain ORT as being the terrain class TYP(n) having the highest image classification probability imp(n). The artificial neural network ANN updates each of the image classification probabilities imp(n) with respect to time. Thus, the image classification probabilities imp(n) are updated dynamically as the terrain ORT represented by the image IMG(1) changes, for example during a journey. The classification of the terrain ORT may change dynamically to reflect changes in the terrain ORT. A computer-implemented training method and system for training the artificial neural network ANN in accordance with an embodiment of the present invention is described herein.
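The selection of the terrain class having the highest image classification probability can be illustrated with a minimal sketch. The scores below are illustrative values in the -1 to +1 range described above, loosely echoing the snow example discussed later; the dictionary keys are merely convenient names for the seven classes TYP(1) to TYP(7).

```python
# Illustrative scores in the range -1..+1 for the seven predefined
# terrain classes; these particular values are assumed, not disclosed.
imp = {
    "dirt road": -0.62,
    "grass": -0.80,
    "mud and ruts": -0.45,
    "road": -0.10,
    "rock": -0.71,
    "sand": -0.88,
    "snow": 0.93,
}

def classify_terrain(imp):
    """Return the terrain class with the highest image classification
    probability imp(n), as the terrain classification system 29 does."""
    return max(imp, key=imp.get)

terrain_class = classify_terrain(imp)
```

Because the probabilities are updated with respect to time, `classify_terrain` would be re-evaluated on each new frame, so the classification changes dynamically as the terrain changes during a journey.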
As shown in Figure 2, the terrain classification system 29 comprises one controller 33, although it will be appreciated that this is merely illustrative. The controller 33 comprises processing means 35 and memory means 37. The processing means 35 may be one or more electronic processing devices 35 which operably execute computer-readable instructions. The memory means 37 may be one or more memory devices 37. The memory means 37 is electrically coupled to the processing means 35. The memory means 37 is configured to store instructions, and the processing means 35 is configured to access the memory means 37 and execute the instructions stored thereon. When executed, the instructions cause the controller 33 to perform the method(s) described herein. The controller 33 comprises an input means 39 and an output means 41. The input means 39 comprises an electrical input 39 of the controller 33. The input means 39 is configured to receive the first image data IMD(1) representing the first image IMG(1). The input means 39 may optionally be configured to receive the second image data IMD(2) representing the second image IMG2. The output means 41 may comprise an electrical output 41. The output 41 is arranged to output a terrain class signal SG1. The terrain class signal SG1 is an electrical signal providing an indication of the terrain class TYP(n) identified by the artificial neural network ANN. Alternatively, or in addition, the output 41 may output discrete values indicating each of the plurality of image classification probabilities imp(n) calculated for the respective terrain classes TYP(n).
The vehicle 5 in the present embodiment comprises a control system 51 for controlling operation of a plurality of vehicle subsystems VSS(n). The vehicle subsystems VSS(n) include, but are not limited to, a propulsion (or engine) management system VSS(1), a transmission system VSS(2), a steering system VSS(3), a brakes system VSS(4), a suspension system VSS(5) and a differential system VSS(6). Although six vehicle subsystems VSS(n) are illustrated as being under the control of the control system 51, in practice a greater number of vehicle subsystems may be included on the vehicle 5 and may be under the control of the control system 51. At least some of the vehicle subsystems VSS(n) may communicate with the control system 51 to feed back information on a current (instantaneous) operating status or condition. The vehicle subsystems VSS(n) are configurable to adjust the dynamic operation of the vehicle 5. The control system 51 is configured to control the vehicle subsystems VSS(n) in dependence on a selected one of a plurality of subsystem control modes SSM(n). The subsystem control modes SSM(n) are selected automatically or semi-automatically by the control system 51. One of the predefined subsystem control modes SSM(n) is selected to provide appropriate control of the vehicle subsystems VSS(n). The subsystem control modes SSM(n) in the present embodiment include the following: 1. A first subsystem control mode SSM(1) in the form of a comfort subsystem control mode suitable for traversing terrain comprising a paved (metalled) road, motorway or regular roadway.
2. A second subsystem control mode SSM(2) in the form of a grass/gravel/snow subsystem control mode (GGS mode) suitable for traversing terrain comprising or consisting of grass, gravel or snow terrain; 3. A third subsystem control mode SSM(3) in the form of a mud/ruts subsystem control mode (MR mode) for traversing terrain comprising or consisting of mud and/or rutted terrain; 4. A fourth subsystem control mode SSM(4) in the form of a sand subsystem control mode suitable for traversing terrain comprising or consisting of sand (or deep, soft snow); 5. A fifth subsystem control mode SSM(5) in the form of a rock subsystem control mode suitable for traversing terrain comprising or consisting of rocky terrain such as a boulder field.
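The selection of a subsystem control mode from the classified terrain can be sketched as a simple lookup. The pairing below is an illustrative assumption inferred from the mode descriptions above (e.g. snow falling under the grass/gravel/snow mode); the disclosed control system 51 may apply different or additional logic.

```python
# Assumed mapping from the seven terrain classes to the five subsystem
# control modes SSM(1)..SSM(5); illustrative only.
TERRAIN_TO_MODE = {
    "road": "SSM(1)",          # comfort mode for paved (metalled) surfaces
    "dirt road": "SSM(2)",     # grass/gravel/snow (GGS) mode
    "grass": "SSM(2)",
    "snow": "SSM(2)",
    "mud and ruts": "SSM(3)",  # mud/ruts (MR) mode
    "sand": "SSM(4)",          # sand mode
    "rock": "SSM(5)",          # rock mode
}

def select_subsystem_mode(terrain_class_signal):
    """Select a subsystem control mode in dependence on the terrain
    class signal SG1 (here represented by the class name)."""
    return TERRAIN_TO_MODE[terrain_class_signal]
```

In the system described, this selection is what the subsystem controller 53 performs before emitting the subsystem control signal SG2.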
The control system 51 comprises a subsystem controller 53 for selecting one of the plurality of subsystem control modes SSM(n). The subsystem controller 53 is configured to output one or more control signals to control operation of the or each vehicle subsystem VSS(n) in a manner appropriate to the driving condition, such as the terrain, on which the vehicle 5 is operating (referred to as the terrain condition). The selection of the subsystem control mode SSM(n) is dependent on the terrain class signal SG1 received from the terrain classification system 29. As shown in Figure 3, the control system 51 comprises one controller 53, although it will be appreciated that this is merely illustrative. The controller 53 comprises processing means 55 and memory means 57. The processing means 55 may be one or more electronic processing devices 55 which operably execute computer-readable instructions. The memory means 57 may be one or more memory devices 57. The memory means 57 is electrically coupled to the processing means 55. The memory means 57 is configured to store instructions, and the processing means 55 is configured to access the memory means 57 and execute the instructions stored thereon. When executed, the instructions cause the controller 53 to perform the method(s) described herein. The controller 53 comprises an input means 59 and an output means 61. The input means 59 comprises an electrical input 59 of the controller 53. The input means 59 is configured to receive the terrain class signal SG1 output from the terrain classification system 29. The output means 61 may comprise an electrical output 61. The output 61 is arranged to output a subsystem control signal SG2. The subsystem control signal SG2 is an electrical signal indicating the selected subsystem control mode SSM(n).
In use, the artificial neural network ANN is configured to process the first image data IMD(1) to determine the image classification probability imp(n) associated with each of the predefined terrain classes TYP(n).
This enables the terrain ORT to be classified through analysis of the first image IMG(1). Advantageously, the artificial neural network ANN enables the classification of an upcoming section of terrain ORT (i.e. a section of terrain ORT in front of the vehicle 5) to be predicted in advance. This differs from prior art systems which are typically re-active, for example using measured dynamic characteristics to classify the terrain ORT currently being traversed by the vehicle 5. Advantageously, a predictive system may enable the one or more vehicle subsystems VSS(n) to be pre-configured before the vehicle 5 reaches the upcoming section of terrain ORT. This may reduce (or avoid) the need to change the selected subsystem control mode.
Figure 4 illustrates a method 100 according to an embodiment of the invention. The method 100 is a method of classifying a terrain ORT. The method 100 may be performed by the system 1 described herein. In particular, the memory 37 may comprise computer-readable instructions which, when executed by the processor 35, perform the method 100 according to an embodiment of the invention.
The method 100 will be described with reference to the vehicle 5 situated in a section of terrain ORT. The method 100 is initiated (BLOCK 105). The method 100 comprises receiving first image data IMD(1) representing a first image IMG(1) of the off-road terrain (BLOCK 110). The first image data IMD(1) is captured by the first imaging sensor 21 provided on the vehicle 5 in the present embodiment. The first image IMG(1) comprises a scene in front of the vehicle 5. The first image IMG(1) is processed by the artificial neural network ANN (BLOCK 115). The processing of the first image IMG(1) comprises calculating an image classification probability imp(n) for each of the plurality of predefined terrain classes TYP(n). The artificial neural network ANN determines which one of the plurality of terrain classes TYP(n) is most likely to correspond to the terrain ORT represented in the first image IMG(1). The image classification probabilities imp(n) determined for each of the plurality of terrain classes TYP(n) are compared (BLOCK 120). The terrain class TYP(n) having the highest image classification probability imp(n) is identified (BLOCK 125). The terrain ORT is classified as being the terrain class TYP(n) identified as having the highest image classification probability imp(n) (BLOCK 130). The terrain class signal SG1 is output identifying the terrain class TYP(n) of the terrain ORT (BLOCK 135). The terrain class signal SG1 may be output to the control system 51. The control system 51 is configured to select one of the plurality of subsystem control modes SSM(n) in dependence on the terrain class signal SG1 (BLOCK 140). The control system 51 outputs the subsystem control signal SG2 to control operation of one or more of the vehicle subsystems VSS(n) (BLOCK 145). The method 100 continues to update dynamically the classification probabilities and the terrain classification (LOOP 150). 
The method 100 ends when the vehicle 5 is switched off, for example ignition OFF (BLOCK 155).
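The blocks of method 100 can be sketched as a processing loop. Everything in this sketch is an assumed placeholder: `capture_image`, `ann_scores`, `select_mode` and `ignition_on` stand in for the first imaging sensor 21, the artificial neural network ANN, the control system 51 and the ignition state respectively.

```python
def method_100(capture_image, ann_scores, select_mode, ignition_on):
    """Illustrative sketch of blocks 105-155: classify each frame of
    terrain and select a subsystem control mode, until ignition OFF."""
    while ignition_on():                         # LOOP 150 / BLOCK 155
        image = capture_image()                  # BLOCK 110: receive IMG(1)
        imp = ann_scores(image)                  # BLOCK 115: probabilities imp(n)
        terrain = max(imp, key=imp.get)          # BLOCKS 120-130: compare and classify
        sg1 = terrain                            # BLOCK 135: terrain class signal SG1
        sg2 = select_mode(sg1)                   # BLOCK 140: subsystem control mode
        yield terrain, sg2                       # BLOCK 145: subsystem control signal SG2

# Toy run over two frames with fixed stand-in components.
frames = iter([{"snow": 0.9, "road": -0.1}, {"sand": 0.7, "road": 0.2}])
modes = {"snow": "SSM(2)", "sand": "SSM(4)", "road": "SSM(1)"}
ignition = iter([True, True, False])
out = list(method_100(lambda: next(frames),
                      lambda image: image,      # scores already computed here
                      modes.get,
                      lambda: next(ignition)))
```

Each pass of the loop corresponds to one dynamic update of the classification probabilities, so the selected mode tracks the terrain as it changes.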
A first example of the operation of the artificial neural network ANN to classify the terrain ORT will now be described. A first image IMG(1) comprising the terrain ORT is captured by the first imaging sensor 21. The first image IMG(1) is shown in Figure 5 by way of example. The terrain ORT comprises a road surface in front of the vehicle 5. The vehicle 5 comprises a bonnet 13 which is visible in a lower portion of the first image IMG(1) as a dark (black) region. A snow field is visible extending from the edge of the road. A series of buildings having snow-covered roofs are visible between the snow field and a row of trees in the background of the first image IMG(1). There are no formal markings, indicia or street furniture visible in relation to the road surface. The first image data IMD(1) representing the first image IMG(1) is supplied to the artificial neural network ANN for processing. The artificial neural network ANN processes the first image IMG(1) and calculates an image classification probability imp(n) for each of the plurality of predefined terrain classes TYP(n). The terrain class TYP(n) most likely to correspond to the terrain ORT represented in the first image IMG(1) is determined in dependence on the image classification probabilities imp(n). The calculated image classification probabilities imp(n) for the terrain classes TYP(n) are illustrated by separate bar charts in the top left region of the first image IMG(1) shown in Figure 5. The image classification probabilities imp(n) are represented (from top to bottom) as follows: dirt road TYP(1), grass TYP(2), mud and ruts TYP(3), road TYP(4), rock TYP(5), sand TYP(6) and snow TYP(7). In the illustrated example, the seventh image classification probability imp(7), indicating the probability that the terrain class is snow TYP(7), is significantly higher than the probabilities calculated for all of the other terrain classes TYP(n). The artificial neural network ANN classifies the terrain ORT as being snow TYP(7).
The terrain class signal SG1 is output to identify the terrain class TYP(n) as being snow TYP(7).
A second example of the operation of the artificial neural network ANN to classify the terrain ORT will now be described. A first image IMG(1) comprising the terrain ORT is captured by the first imaging sensor 21.
The first image IMG(1) is shown in Figure 6 by way of example. The terrain ORT comprises an un-metalled off-road surface comprising a dirt track (also known as a dry-weather road or an earth road) formed of two parallel tracks separated from each other by a central vegetation line. The dirt track is bounded on each side by trees. A single sign in the form of an arrow is visible in the first image IMG(1). The first image data IMD(1) representing the first image IMG(1) is supplied to the artificial neural network ANN for processing.
The artificial neural network ANN processes the first image IMG(1) and calculates an image classification probability imp(n) for each of the plurality of predefined terrain classes TYP(n). The terrain class TYP(n) most likely to correspond to the terrain ORT represented in the first image IMG(1) is determined in dependence on the image classification probabilities imp(n). The calculated image classification probabilities imp(n) for the terrain classes TYP(n) are illustrated by separate bar charts in the top left region of the first image IMG(1) shown in Figure 6. The image classification probabilities imp(n) are represented (from top to bottom) as follows: dirt road TYP(1), grass TYP(2), mud and ruts TYP(3), road TYP(4), rock TYP(5), sand TYP(6) and snow TYP(7). In the illustrated example, the first image classification probability imp(1) corresponding to a dirt road TYP(1) and the third image classification probability imp(3) corresponding to mud and ruts TYP(3) are positive values (+VE), but the other classification probabilities are negative values (-VE). The third image classification probability imp(3) is the largest value, indicating that the artificial neural network ANN determines that the terrain ORT is most likely to be mud and ruts TYP(3). The artificial neural network ANN classifies the terrain ORT as being mud and ruts TYP(3). The terrain class signal SG1 is output to identify the terrain class TYP(n) as being mud and ruts TYP(3).
The artificial neural network ANN is trained using a machine learning algorithm MLA. The machine learning algorithm MLA is computer-implemented, for example by one or more processors. In the present embodiment, the artificial neural network ANN is trained using a supervised training method. The training is performed using a computer-implemented method to process a plurality of training data sets. The or each training data set may, for example, comprise image data IMD representing an image IMG. The or each training data set is annotated (or labelled) to identify the or each terrain class TYP(n) represented in the image IMG. By way of example, the first image IMG(1) shown in Figure 5 would be annotated to indicate that the terrain ORT is snow TYP(7); and the first image IMG(1) shown in Figure 6 would be annotated to indicate that the terrain ORT is mud and ruts TYP(3). The image IMG represented by the image data IMD may comprise more than one terrain class TYP(n) of the terrain ORT. The training data may be annotated to indicate each of the different terrain classes TYP(n) of the terrain ORT represented in the image IMG.
One or more of the plurality of training data sets may be specific to a particular terrain class TYP(n). The artificial neural network ANN is preferably trained using image data IMD captured by an imaging sensor having a similar position, direction and orientation to the first imaging sensor 21 provided on the vehicle 5. The training of the artificial neural network ANN using image data IMD more directly comparable to that generated by the first imaging sensor 21 can provide improved accuracy in predicting the terrain class TYP(n).
A computer-implemented training method 200 is illustrated in Figure 7. The method 200 comprises supplying a plurality of the training data sets to the machine learning algorithm MLA. The machine learning algorithm MLA processes the training data sets to generate the artificial neural network ANN. The training data sets comprise a set of first image data IMD(1) representing first images IMG(1); and a set of second image data IMD(2) representing second images IMG2. As illustrated in Figure 7, the training method 200 may comprise processing more than two sets of training data. The training data preferably represents a distribution of image data IMD(n) across each of the terrain classes TYP(n) to be classified. The distribution of image data IMD(n) across each of the terrain classes TYP(n) is preferably even. The training data may comprise a separate set of image data IMD(n) for each of the terrain classes TYP(n). The training of the artificial neural network ANN should be performed in dependence on data from each of the sets of image data IMD(n) at the same time. Alternatively, the training data may be generated by combining image data IMD(n) from two or more of the data sets. The training data sets are annotated to provide an indication of the terrain class TYP(n) of the terrain ORT represented in the first and second images IMG(1), IMG2. The first image data IMD(1) represents a plurality of the first images IMG(1), wherein the first images IMG(1) comprise or consist of a first terrain class TYP(1) of terrain. At least some of the first images IMG(1) in the set of first image data IMD(1) may represent a terrain ORT which is exclusively of the first terrain class TYP(1). Alternatively, or in addition, at least some of the first images IMG(1) in the set of first image data IMD(1) may represent a terrain ORT which is exclusively of other terrain classes TYP(n) (i.e. not the first terrain class TYP(1)).
The second image data IMD(2) represents a plurality of the second images IMG2, wherein the second images IMG2 comprise or consist of a second terrain class TYP(2). At least some of the second images IMG2 in the set of second image data IMD(2) may represent a terrain ORT which is exclusively of the second terrain class TYP(2). Alternatively, or in addition, at least some of the second images IMG2 in the set of second image data IMD(2) may represent a terrain ORT which is exclusively of other terrain classes TYP(n) (i.e. not the second terrain class TYP(2)). The training data sets may comprise a set of third image data IMD3 representing third images IMG3 representing terrain ORT comprising more than one terrain class TYP(n), for example comprising both the first and second terrain classes TYP(1), TYP(2). The artificial neural network ANN is trained using the plurality of training data sets to differentiate between the first and second terrain classes TYP(1), TYP(2). The training is performed in respect of each of the plurality of terrain classes TYP(n). The resulting artificial neural network ANN can classify the terrain ORT as being one of the plurality of terrain classes TYP(n). At least in certain embodiments, the artificial neural network ANN may differentiate between different terrain classes TYP(n) in the same image IMG.
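The preference for an even distribution of labelled image data across the terrain classes, combined into a single training set, can be sketched as follows. The filenames and set structure below are assumed for illustration; the disclosed method does not prescribe any particular data layout.

```python
import random

def build_training_data(per_class_sets, per_class_count):
    """Draw the same number of labelled examples from every class set,
    giving an even distribution across terrain classes, then shuffle so
    training sees data from all sets at the same time."""
    combined = []
    for terrain_class, examples in per_class_sets.items():
        sample = random.sample(examples, per_class_count)
        combined.extend((image, terrain_class) for image in sample)
    random.shuffle(combined)
    return combined

# Toy per-class sets; each entry stands in for annotated image data.
sets = {
    "snow": [f"snow_{i}.png" for i in range(10)],
    "mud and ruts": [f"mud_{i}.png" for i in range(10)],
}
data = build_training_data(sets, per_class_count=5)
```

Combining the per-class sets before shuffling corresponds to the alternative described above, in which the training data is generated by combining image data from two or more of the data sets.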
The terrain classification system 29 has been described herein as processing the first image data IMD(1) representing at least substantially all of the first image IMG(1). In a variant, the terrain classification system 29 may classify the terrain ORT by processing a sub-set of the first image data IMD(1), for example corresponding to a segment of the first image IMG(1). By reducing the amount of image data IMD(1) processed by the artificial neural network ANN, the computational load for classification of the terrain ORT may be reduced. A segmentation model SGM may optionally be provided to pre-process the image data IMD(1) to identify a segment of the first image IMG(1) to be classified by the terrain classification system 29. The segmentation model SGM is optional and is shown schematically in Figure 2 by way of example. The segmentation model SGM may be implemented in the terrain classification system 29 or in a separate image processing system (not shown). The segmentation model SGM is configured to segment each image IMG(n) into a plurality of semantic areas IMS-n. The segmentation model SGM in the present embodiment is configured to segment the first image IMG(1) into a plurality of first semantic areas IMS-1. The segmentation model SGM identifies parts of the first image IMG(1) having the same semantic classification.
The or each first image segment IMS-1 represents a semantic area within the first image IMG(1). The terrain ORT to be classified by the artificial neural network ANN may form only a part or a sub-set of the first image IMG(1). The segmentation model SGM is configured to identify the first image segments IMS-1 corresponding to the terrain ORT. As shown schematically in Figure 4, the method 100 may optionally comprise receiving the one or more first image segments IMS-1 corresponding to the terrain ORT from the segmentation model SGM. The classification of the terrain ORT may be performed in respect of the one or more first image segments IMS-1 corresponding to the terrain ORT.
The segmentation model SGM is configured to segment the first image IMG(1) on a per-pixel basis such that the segmentation is performed at a pixel level. The process comprises classifying each pixel in the first image IMG(1) as having one of a plurality of semantic classifications. The pixels in the first image IMG(1) classified as having the same semantic classification are identified as belonging to the same first image semantic area IMS-n. Alternatively, the segmentation model SGM may segment the first image IMG(1) based on groups or clusters of pixels.
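The pixel-level grouping described above can be sketched with a tiny label grid. The grid and the label values (0 = bonnet, 1 = terrain, 2 = sky) are assumptions standing in for the per-pixel output of a segmentation model; the sketch only shows how same-labelled pixels form a semantic area and how a mask restricts the image data passed to the classifier.

```python
import numpy as np

def semantic_areas(pixel_labels):
    """Group pixel coordinates by their per-pixel semantic label, so
    that same-labelled pixels form one semantic area."""
    areas = {}
    for (row, col), label in np.ndenumerate(pixel_labels):
        areas.setdefault(int(label), []).append((row, col))
    return areas

def terrain_mask(pixel_labels, terrain_label):
    """Boolean mask selecting only the pixels corresponding to the
    terrain segment to be classified."""
    return pixel_labels == terrain_label

# Assumed per-pixel output for a 3x3 image: 0 = bonnet, 1 = terrain, 2 = sky.
labels = np.array([[2, 2, 2],
                   [1, 1, 1],
                   [0, 0, 0]])
areas = semantic_areas(labels)
mask = terrain_mask(labels, 1)
```

Only the pixels selected by the mask would be supplied to the artificial neural network, which is the mechanism by which the variant reduces computational load.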
An example of the pre-processing of the first image data IMD(1) to segment the first image IMG(1) is shown in Figure 8. The segmentation model SGM segments the first image IMG(1) into a first segment IMS-1 corresponding to the bonnet (or hood) 13 of the vehicle 5; and a second segment IMS-2 corresponding to the snow field beyond the section of road. The subsequent classification of the terrain ORT is performed in respect of the first and second segments IMS-1, IMS-2.
In the embodiment(s) described herein, the artificial neural network ANN is implemented on-board the vehicle 5. In use, the artificial neural network ANN is implemented locally by the terrain classification system 29. In a variant, the artificial neural network ANN could be implemented off-board the vehicle 5. The artificial neural network ANN may be implemented by a remote server configured to receive the first image data IMD(1) from the vehicle 5 over a wireless communication network. The terrain class signal SG1 may be transmitted to the vehicle 5 to control one or more vehicle systems.
It will be appreciated that various changes and modifications can be made to the present invention without departing from the scope of the present application.
Claims (13)
- CLAIMS1. A system for classifying a section of terrain, the system comprising one or more processors collectively configured to: receive image data representing an image which comprises a section of terrain, the image data being captured by at least one optical imaging sensor provided on a vehicle; input at least a portion of the image data into an artificial neural network, the artificial neural network being configured to determine a plurality of image classification probabilities, the plurality of image classification probabilities each indicating a probability that the section of terrain is a respective one of a plurality of terrain classes; determine a terrain class of the section of terrain in dependence on the plurality of image classification probabilities; and output a terrain class signal indicating the determined terrain class of the section of terrain.
- 2. A system as claimed in claim 1, wherein determining the terrain class of the section of terrain comprises determining which of the plurality of terrain classes is determined to have the highest image classification probability.
- 3. A system as claimed in claim 1 or claim 2, wherein the one or more processors is collectively configured to: process the image data using an image segmentation model to segment the image into a plurality of image segments, wherein at least one of the plurality of image segments corresponds to the section of terrain to be classified in the image.
- 4. A system as claimed in claim 3, wherein the image data input into the artificial neural network comprises or consists of the at least one image segment corresponding to the section of terrain to be classified.
- 5. A control system comprising the system of any one of the preceding claims and a subsystem control mode selector, wherein the subsystem control mode selector is configured to select a subsystem control mode in dependence on the terrain class signal.
- 6. A vehicle comprising the control system of claim 5 or the system of any one of claims 1 to 4.
- 7. A method for classifying a section of terrain, the method comprising: receiving image data representing an image comprising a section of terrain, the image data being captured by at least one optical imaging sensor provided on a vehicle; inputting at least a portion of the image data into an artificial neural network configured to determine a plurality of image classification probabilities, the plurality of image classification probabilities each indicating a probability that the section of terrain is a respective one of a plurality of terrain classes; determining a terrain class of the section of terrain in dependence on the plurality of image classification probabilities; and outputting a terrain class signal indicating the terrain class of the section of terrain.
- 8. A method as claimed in claim 7 comprising: processing the image data using a segmentation model to segment the image into a plurality of image segments, wherein at least one of the plurality of image segments corresponds to the section of terrain to be classified in the image.
- 9. A method as claimed in claim 8, wherein the image data input into the artificial neural network comprises or consists of the at least one image segment corresponding to the section of terrain to be classified.
- 10. A computer-implemented training method for training an artificial neural network to classify a section of terrain represented in an image; the method comprising receiving a plurality of training data sets, the training data sets comprising: a first image data representing a plurality of first images, each of the first images representing a section of terrain of a first terrain class, the first image data being labelled as being the first terrain class; a second image data representing a plurality of second images, each of the second images representing a section of terrain of a second terrain class, the second image data being labelled as being the second terrain class; and training the artificial neural network to determine a first image classification probability indicating the probability that a section of terrain represented in an image is of the first terrain class, and a second image classification probability indicating the probability that the section of terrain represented in the image is of the second terrain class.
- 11. An artificial neural network trained using the method claimed in claim 10, wherein the artificial neural network is configured to: receive image data representing an image comprising a section of terrain; process the image data and determine a plurality of image classification probabilities, the plurality of image classification probabilities each indicating a probability that the section of terrain is a respective one of a plurality of terrain classes; and output the plurality of image classification probabilities.
- 12. A system for classifying a section of terrain, the system comprising one or more processors collectively configured to implement the artificial neural network claimed in claim 11, wherein the system is configured to determine a terrain class of the section of terrain in dependence on the plurality of image classification probabilities.
- 13. Computer-readable instructions which, when executed by a computer, cause the computer to perform a method according to any of claims 7 to 10.
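The training method of claim 10 is, in outline, standard supervised classification: two sets of labelled terrain images are used to fit a network that outputs one probability per terrain class. The sketch below is purely illustrative and is not the patented method: a single softmax layer stands in for the artificial neural network, and the "images" are synthetic feature vectors drawn from two assumed clusters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two labelled training sets of claim 10:
# flattened "images" drawn from two separable clusters. The feature
# dimension (16) and the cluster means are illustrative assumptions.
first_images = rng.normal(loc=-1.0, scale=0.5, size=(100, 16))   # first terrain class (0)
second_images = rng.normal(loc=+1.0, scale=0.5, size=(100, 16))  # second terrain class (1)
X = np.vstack([first_images, second_images])
y = np.array([0] * 100 + [1] * 100)

# A single softmax layer stands in for the artificial neural network.
W = np.zeros((16, 2))
b = np.zeros(2)

def probs(x):
    """Per-class classification probabilities (softmax over logits)."""
    z = x @ W + b
    z -= z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Plain gradient descent on the cross-entropy loss; the gradient of the
# loss with respect to the logits is (probabilities - one-hot labels).
for _ in range(200):
    p = probs(X)
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1.0
    W -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean(axis=0)

# After training, the model outputs a first and a second image
# classification probability for an unseen section of terrain.
p_new = probs(rng.normal(loc=-1.0, scale=0.5, size=(1, 16)))
```

A real embodiment would replace the softmax layer with a deeper network (the classification hierarchy cites CNNs under G06N3/0464), but the labelled-data-in, class-probabilities-out contract of claim 10 is the same.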
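Claims 9, 11 and 12 together describe the inference path: crop the image segment corresponding to the section of terrain, obtain per-class probabilities from the trained network, and determine the terrain class from those probabilities. A minimal sketch under assumed conventions: bounding-box segments, a hypothetical label set, an argmax decision rule, and a stub in place of the trained network (none of these specifics appear in the claims).

```python
import numpy as np

# Hypothetical label set; the claims leave the terrain classes unspecified.
TERRAIN_CLASSES = ["grass", "sand", "gravel", "tarmac"]

def extract_segment(image, box):
    """Crop the segment corresponding to the section of terrain (claim 9).
    `box` is (top, left, height, width) -- an assumed convention."""
    top, left, h, w = box
    return image[top:top + h, left:left + w]

def network_probabilities(segment):
    """Stub for the trained network of claim 11: returns one probability
    per terrain class. The logits here are arbitrary functions of the
    segment mean, used only so the sketch runs end to end."""
    z = np.array([segment.mean() * k for k in range(1, len(TERRAIN_CLASSES) + 1)])
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(probabilities):
    """Claim 12: determine the terrain class in dependence on the
    plurality of image classification probabilities (here, by argmax)."""
    return TERRAIN_CLASSES[int(np.argmax(probabilities))]

image = np.ones((8, 8), dtype=np.float32)
segment = extract_segment(image, (2, 2, 4, 4))
p = network_probabilities(segment)
label = classify(p)
```

The argmax rule is only one way to decide "in dependence on" the probabilities; an embodiment could equally apply a confidence threshold or combine probabilities across frames.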
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2401887.1A GB2638006A (en) | 2024-02-12 | 2024-02-12 | Method and apparatus for classifiying terrain |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2401887.1A GB2638006A (en) | 2024-02-12 | 2024-02-12 | Method and apparatus for classifiying terrain |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| GB202401887D0 GB202401887D0 (en) | 2024-03-27 |
| GB2638006A true GB2638006A (en) | 2025-08-13 |
Family
ID=90354719
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB2401887.1A Pending GB2638006A (en) | 2024-02-12 | 2024-02-12 | Method and apparatus for classifiying terrain |
Country Status (1)
| Country | Link |
|---|---|
| GB (1) | GB2638006A (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130080359A1 (en) * | 2010-07-06 | 2013-03-28 | Julia Vivien Will | Assisting vehicle guidance over terrain |
| JP6299427B2 (en) * | 2013-05-31 | 2018-03-28 | トヨタ自動車株式会社 | Scene estimation method and scene estimation apparatus |
| US20210357648A1 (en) * | 2019-02-15 | 2021-11-18 | Rutgers, The State University Of New Jersey | Image processing neural network systems and methods with scene understanding |
| US20230013451A1 (en) * | 2021-02-04 | 2023-01-19 | Tencent Technology (Shenzhen) Company Limited | Information pushing method in vehicle driving scene and related apparatus |
Worldwide Applications (1)
| Date | Application | Status |
|---|---|---|
| 2024-02-12 | GB GB2401887.1A (GB2638006A) | Pending |
Also Published As
| Publication number | Publication date |
|---|---|
| GB202401887D0 (en) | 2024-03-27 |
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| CN112818910B (en) | Vehicle gear control method and device, computer equipment and storage medium | |
| US9415779B2 (en) | Vehicle control system and method | |
| US11747169B2 (en) | Real-time updates to maps for autonomous navigation | |
| JP7604390B2 (en) | Extending autonomous driving capabilities into new territories | |
| CN101950350B (en) | Clear path detection using a hierachical approach | |
| CN114475573B (en) | Fluctuating road condition identification and vehicle control method based on V2X and vision fusion | |
| CN115880658B (en) | Early warning method and system for lane departure of automobile in night scene | |
| JP4762491B2 (en) | Method for detecting road bends and system for carrying out this method | |
| CN104902261A (en) | Device and method for road surface identification in low-definition video streaming | |
| CN116588078A (en) | Vehicle control method, device, electronic equipment and computer readable storage medium | |
| CN114599567A (en) | Vehicle cluster tracking system | |
| US20250074464A1 (en) | Assistance system for use on the road surface | |
| CN113627608A (en) | Visual behavior guided object detection | |
| US20240104940A1 (en) | Method of Classifying a Road Surface Object, Method of Training an Artificial Neural Network, and Method of Operating a Driver Warning Function or an Automated Driving Function | |
| WO2020160927A1 (en) | Vehicle control system and method | |
| US20230368547A1 (en) | Mitigation strategies for lane marking misdetection | |
| GB2638006A (en) | Method and apparatus for classifiying terrain | |
| US20240185613A1 (en) | Object detection system | |
| GB2584383A (en) | Vehicle control system and method | |
| GB2638008A (en) | Terrain classification method and apparatus | |
| US12488558B1 (en) | Systems and methods for encoding sensor data tagged with geolocation data | |
| JP7776554B2 (en) | External environment recognition system, vehicle control device, roadway recognition method, and program | |
| WO2025172190A1 (en) | Subsystem control mode selection method and apparatus | |
| HK40043525A (en) | Vehicle gear control method, device, computer equipment and storage medium | |
| HK40043525B (en) | Vehicle gear control method, device, computer equipment and storage medium |