US20210056432A1 - Method for training an artificial neural generator network, method for training an artificial neural discriminator network, and test unit - Google Patents
- Publication number
- US20210056432A1 (application US 16/999,533)
- Authority
- US
- United States
- Prior art keywords
- artificial neural
- parameter set
- network
- motor vehicle
- discriminator network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
  - G05—CONTROLLING; REGULATING
    - G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
      - G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
        - G05B13/02—Adaptive control systems, electric
          - G05B13/0265—Adaptive control systems, electric, the criterion being a learning criterion
            - G05B13/027—Adaptive control systems, electric, the criterion being a learning criterion using neural networks only
  - G06—COMPUTING OR CALCULATING; COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F11/00—Error detection; Error correction; Monitoring
        - G06F11/36—Prevention of errors by analysis, debugging or testing of software
          - G06F11/3668—Testing of software
            - G06F11/3672—Test management
              - G06F11/3688—Test management for test execution, e.g. scheduling of test suites
              - G06F11/3692—Test management for test results analysis
          - G06F11/3698—Environments for analysis, debugging or testing of software
    - G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computing arrangements based on biological models
        - G06N3/02—Neural networks
          - G06N3/04—Architecture, e.g. interconnection topology
            - G06N3/045—Combinations of networks
            - G06N3/0454
            - G06N3/047—Probabilistic or stochastic networks
            - G06N3/0475—Generative networks
            - G06N3/048—Activation functions
            - G06N3/0499—Feedforward networks
          - G06N3/08—Learning methods
            - G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
            - G06N3/088—Non-supervised learning, e.g. competitive learning
            - G06N3/094—Adversarial learning
- B—PERFORMING OPERATIONS; TRANSPORTING
  - B60—VEHICLES IN GENERAL
    - B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
      - B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
        - B60W50/06—Improving the dynamic response of the control system, e.g. improving the speed of regulation or avoiding hunting or overshoot
      - B60W60/00—Drive control systems specially adapted for autonomous road vehicles
Definitions
- the present invention relates to a computer-implemented method for training an artificial neural generator network of generative adversarial networks for the purpose of approximating second test results from an identified subset of first test results of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle.
- the present invention also relates to a computer-implemented method for training an artificial neural discriminator network of generative adversarial networks for the purpose of distinguishing a first parameter set of driving situation parameters from a second parameter set of driving situation parameters of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle.
- the present invention further relates to a test unit for approximating second test results from an identified subset of first test results of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle, using generative adversarial networks which are configured to train an artificial neural generator network.
- the present invention further relates to a computer program and a computer-readable data carrier.
- Driver assistance systems, e.g. an adaptive speed controller and/or functions for highly automated driving, may be verified and validated with the aid of various checking mechanisms.
- hardware-in-the-loop methods, software-in-the-loop methods, simulations and/or test drives may be used.
- DE 10 2017 200 180 A1 specifies a method for verifying and/or validating a vehicle function, which is provided for autonomously guiding a vehicle in the longitudinal and/or transverse direction(s).
- the method comprises an ascertainment, based on surroundings data relating to the surroundings of the vehicle, of a test control instruction of the vehicle function for an actuator of the vehicle, the test control instruction not being implemented by the actuator.
- the method also comprises a simulation, based on the surroundings data and using a traffic participant model relating to at least one traffic participant in the surroundings of the vehicle, of a fictitious traffic situation, which would be present if the test control instruction had been implemented.
- the method further comprises a provision of test data relating to the fictitious traffic situation.
- the vehicle function is passively operated in the vehicle to ascertain the test control instruction.
- the disadvantage of this method is that, to verify and/or validate the vehicle function, an actual operation of the vehicle is necessary to ascertain the necessary data.
- the object is achieved by a computer-implemented method for training an artificial neural generator network of generative adversarial networks, a computer-implemented method for training an artificial neural discriminator network of generative adversarial networks, a test unit, a computer program and a computer-readable data carrier, for example a non-transitory computer readable media.
- the invention relates to a computer-implemented method for training an artificial neural generator network of generative adversarial networks (GAN) for the purpose of approximating second test results from an identified subset of first test results of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle.
- the generative adversarial networks include the artificial neural generator network and an artificial neural discriminator network.
- the method comprises a provision of the artificial neural generator network, which approximates a first parameter set of driving situation parameters, made up of the surroundings parameters describing the surroundings of the motor vehicle and the ego parameters describing the status of the motor vehicle, based on randomly generated input data.
- the second test results are formed by the first parameter set.
- the method also comprises a provision of the artificial neural discriminator network, which receives the first parameter set of driving situation parameters approximated by the artificial neural generator network and a second parameter set, in particular a reference parameter set, of driving situation parameters provided by a data source and made up of the surroundings parameters describing the surroundings of the motor vehicle and ego parameters describing the status of the motor vehicle.
- the identified subset of first test results is formed by the second parameter set.
- the method furthermore comprises an outputting of a function value for evaluating the first parameter set or the second parameter set by the artificial neural discriminator network.
- the method comprises a training of the artificial neural generator network based on the evaluation of the first parameter set or the second parameter set by the artificial neural discriminator network.
- the artificial neural generator network may be advantageously trained to approximate parameter sets which are similar to the second parameter sets received by the artificial neural discriminator network from the data source.
- the second parameter sets provided to the artificial neural discriminator network by the data source may thus be reproduced by the artificial neural generator network within a predefined definition range.
- the second parameter sets received by the artificial neural discriminator network, which may be generated by a simulation or an artificial neural network, are critical test results of interest from a total quantity of test results.
- test results relate to specific traffic situations used in scenario-based testing for validating, checking and/or verifying functions of a system or control unit for the at least partial autonomous guidance of a motor vehicle.
- scenarios are defined, which may be referred to as an abstraction of a traffic situation.
- One logical scenario is the abstraction of a traffic situation, including the road, the driving behavior and the surrounding traffic, without any specific parameter values.
- by assigning specific parameter values to a logical scenario, a specific scenario is obtained; a specific scenario of this type corresponds to an individual traffic situation in each case.
- the method according to the invention also comprises the step that, if the function value output by the artificial neural discriminator network is within a first value range, the artificial neural discriminator network determines that the evaluated parameter set has been approximated by the artificial neural generator network and, if the function value output by the artificial neural discriminator network is within a second value range, the artificial neural discriminator network determines that the evaluated parameter set has been provided by the data source.
- the artificial neural discriminator network is thus advantageously able to decide, based on the output function value, whether the evaluated parameter set was approximated by the generator network or provided by the data source, due to the division into the first value range and the second value range.
- the method according to the invention further comprises the step that a total value range comprising the first value range and the second value range is between 0 and 1, the total value range being divided into the first value range and the second value range adjacent to the first value range.
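The division of the total value range between 0 and 1 into two adjacent value ranges can be expressed as a simple decision rule. This is an illustrative sketch: the split point of 0.5 is an assumption, since the text only requires two adjacent ranges covering [0, 1].

```python
def classify_parameter_set(function_value, threshold=0.5):
    """Map the discriminator's output function value in [0, 1] to a decision.

    The threshold of 0.5 is an illustrative assumption; the text only states
    that the total value range is split into two adjacent value ranges.
    """
    if not 0.0 <= function_value <= 1.0:
        raise ValueError("function value must lie in the total value range [0, 1]")
    # First value range: the parameter set is judged to come from the generator.
    if function_value < threshold:
        return "approximated_by_generator"
    # Second, adjacent value range: judged to come from the data source.
    return "provided_by_data_source"
```

With this convention, an output of 0 corresponds to a generated set and an output of 1 to a set provided by the data source, matching the evaluation scale described later in the text.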
- the method according to the invention also comprises the step that the second parameter set of driving situation parameters provided by the data source is generated by an artificial neural network or by a simulation.
- the second parameter set of driving situation parameters may thus be advantageously obtained from a plurality of data sources or generated in different ways.
- the method according to the invention also comprises the step that the ego parameters include a velocity of the motor vehicle, and the surroundings parameters include a velocity of another motor vehicle and a distance between the motor vehicle and the other motor vehicle.
- a so-called cut-in scenario may be approximated using these parameters.
- the cut-in scenario may thus be referred to as a traffic situation, in which a highly automated or autonomous vehicle travels in a predetermined lane, and another vehicle having a reduced velocity compared to the ego vehicle cuts into the lane of the ego vehicle from another lane at a certain distance.
- the velocity of the ego vehicle and of the other vehicle, which is also referred to as the fellow vehicle, is constant. Since the velocity of the ego vehicle is higher than that of the fellow, the ego vehicle must be braked to avoid a collision between the two vehicles.
- critical traffic situations may thus be approximated by the method according to the invention within a predefined definition range of aforementioned parameters.
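For illustration, the three driving situation parameters of the cut-in scenario can be collected in a small container type. The field names and the `is_critical` criterion are illustrative assumptions, not part of the original text, which only names the three quantities.

```python
from dataclasses import dataclass

@dataclass
class CutInParameters:
    """Driving situation parameters of the cut-in scenario (illustrative)."""
    ego_velocity: float     # ego parameter: velocity of the motor vehicle
    fellow_velocity: float  # surroundings parameter: velocity of the other vehicle
    cut_in_distance: float  # surroundings parameter: distance between the two vehicles

    def is_critical(self) -> bool:
        # The scenario requires braking when the faster ego vehicle is cut off
        # by a slower fellow vehicle at a positive distance ahead.
        return self.ego_velocity > self.fellow_velocity and self.cut_in_distance > 0.0
```

A parameter set such as `CutInParameters(30.0, 20.0, 25.0)` then describes one individual traffic situation within the scenario.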
- the method according to the invention also comprises the step that the artificial neural generator network includes an input layer having 24 neurons and 5 hidden layers, a first hidden layer having 92 neurons and a dropout of 10%, a second hidden layer having 64 neurons and a dropout of 10%, a third hidden layer having 32 neurons and a dropout of 10%, a fourth hidden layer having 24 neurons without a dropout and a fifth hidden layer having 24 neurons without a dropout, and an output layer having 2 neurons for outputting the parameter set of driving situation parameters.
- Such a configuration of the artificial neural generator network is advantageously able to approximate a target function to be approximated with a high degree of accuracy.
- the method according to the invention also comprises the step that the artificial neural discriminator network includes an input layer having 128 neurons and 6 hidden layers, a first hidden layer having 128 neurons and a dropout of 20% and a second through sixth hidden layer having 80 neurons and a dropout of 10%, and an output layer having 1 neuron with a sigmoid activation function.
- Such a configuration of the artificial neural discriminator network advantageously enables the artificial neural discriminator network to be able to distinguish between the parameter sets generated by the artificial neural generator network and the parameter sets provided by the data source with a high degree of accuracy.
- the method according to the invention also comprises the step that the hidden layers of the artificial neural generator network and the artificial neural discriminator network use an ELU activation function and an Adam optimization method.
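The two layer specifications above can be sketched as plain feedforward passes. This is a minimal inference-only sketch under stated assumptions: the weight initialization is illustrative, dropout (a training-time regularizer) is omitted, the generator's output activation is assumed to be linear, and the Adam training step is not shown.

```python
import math
import random

def elu(x, alpha=1.0):
    """ELU activation used in the hidden layers of both networks."""
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def sigmoid(x):
    """Sigmoid activation of the discriminator's single output neuron."""
    return 1.0 / (1.0 + math.exp(-x))

def init_layers(sizes, rng):
    """Randomly initialized weights and biases for consecutive layer sizes.
    The Gaussian initialization is an illustrative assumption."""
    return [([[rng.gauss(0.0, 0.1) for _ in range(n_in)] for _ in range(n_out)],
             [0.0] * n_out)
            for n_in, n_out in zip(sizes, sizes[1:])]

def forward(layers, x, output_activation):
    """Dense forward pass: ELU on hidden layers, output_activation on the last.
    Dropout only acts during training and is omitted in this sketch."""
    for i, (weights, biases) in enumerate(layers):
        activation = output_activation if i == len(layers) - 1 else elu
        x = [activation(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

rng = random.Random(42)
# Generator: 24-neuron input, hidden layers of 92/64/32/24/24 neurons,
# 2 output neurons for the parameter set of driving situation parameters.
generator = init_layers([24, 92, 64, 32, 24, 24, 2], rng)
# Discriminator: 128-neuron input, hidden layers of 128 and five times 80
# neurons, 1 output neuron with a sigmoid activation function.
discriminator = init_layers([128, 128, 80, 80, 80, 80, 80, 1], rng)

noise = [rng.random() for _ in range(24)]               # input noise in [0, 1]
parameter_set = forward(generator, noise, lambda v: v)  # linear output assumed
score = forward(discriminator, [rng.random() for _ in range(128)], sigmoid)
```

The generator thus maps 24 noise values to a 2-value parameter set, while the discriminator maps its input to a single evaluation in the interval (0, 1).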
- the present invention also relates to a computer-implemented method for training an artificial neural discriminator network of generative adversarial networks for the purpose of distinguishing a first parameter set of driving situation parameters from a second parameter set of driving situation parameters of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle.
- the method comprises a receipt, by the artificial neural discriminator network, of a first parameter set of driving situation parameters approximated by an artificial neural generator network, made up of surroundings parameters describing the surroundings of the motor vehicle and ego parameters describing the status of the motor vehicle.
- the method also comprises a receipt, by the artificial neural discriminator network, of a second parameter set, in particular a reference parameter set, of driving situation parameters provided by a data source and made up of surroundings parameters describing the surroundings of the motor vehicle and ego parameters describing the status of the motor vehicle.
- the method further comprises an approximation and outputting of a function value for evaluating the first parameter set or the second parameter set by the artificial neural discriminator network.
- the method includes a training of the artificial neural discriminator network based on an approximation error calculated by comparing the output function value with a predefined setpoint value.
- the artificial neural discriminator network is thus able to evaluate the first parameter set generated by the artificial neural generator network as well as the second parameter set provided by the data source and calculate an approximation error of the artificial neural discriminator network, based on the evaluation of the first parameter set or the second parameter set.
- the artificial neural discriminator network thus evaluates how similar the parameter sets generated by the artificial neural generator network are to the second parameter sets provided by the data source.
- the evaluation, i.e. the function value output by the artificial neural discriminator network, is a numeric value, which may be, for example, between 0 and 1.
- An evaluation of 0 may indicate, for example, that the artificial neural discriminator network views the evaluated parameter set as having been generated by the artificial neural generator network. If the evaluation is equal to 1, the artificial neural discriminator network considers the evaluated parameter set as having been provided by the data source.
- the parameter sets provided by the data source may be viewed as real data, due to the fact that they may originate, for example, in a simulation or a real test operation of a motor vehicle.
- the parameter sets generated by the artificial neural generator network are viewed as having been artificially generated. The task of the artificial neural discriminator network is therefore to distinguish between real and artificially generated data.
- the role of the artificial neural generator network is to generate, based on feedback from the artificial neural discriminator network, parameter sets of driving situation parameters which correspond to the real data provided by the data source within a predefined definition range.
- the method according to the invention also comprises the step that the function value output by the artificial neural discriminator network is an actual value of the artificial neural discriminator network, the approximation error of the artificial neural discriminator network being constituted by a difference between the actual value and the setpoint value, and a weighting of the artificial neural discriminator network being adapted using the approximation error.
- the artificial neural discriminator network is thus advantageously trained to improve its evaluation result, whether the received parameter sets are parameter sets provided by the data source or parameter sets generated by the artificial neural generator network.
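A minimal sketch of this error calculation, assuming the 0/1 evaluation convention described above (1 for data provided by the data source, 0 for data approximated by the generator). The plain gradient step stands in for the Adam method named in the text, whose moment estimates are omitted here.

```python
def setpoint_for(source):
    """Setpoint value for the discriminator's output: 1 for a parameter set
    provided by the data source, 0 for one approximated by the generator.
    The 0/1 assignment follows the evaluation convention described above."""
    return 1.0 if source == "data_source" else 0.0

def approximation_error(actual_value, setpoint_value):
    """Approximation error as the difference between the setpoint value and
    the actual value (the function value output by the discriminator)."""
    return setpoint_value - actual_value

def weight_update(weight, gradient, learning_rate=0.001):
    """Plain gradient step standing in for the Adam optimization method;
    Adam additionally keeps running estimates of first and second moments."""
    return weight - learning_rate * gradient
```

The error drives the adaptation of the discriminator's weighting, so that its evaluation of real versus generated parameter sets improves over the training cycles.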
- the method according to the invention also comprises the step that the artificial neural generator network and the artificial neural discriminator network are alternately trained, the artificial neural discriminator network being trained more frequently than the artificial neural generator network.
- This procedure has positive effects on the training process, since the particular networks are thus trained in a training cycle, the artificial neural discriminator network being trained more frequently than the artificial neural generator network, i.e. a training cycle of the artificial neural discriminator network is thus longer than a training cycle of the artificial neural generator network.
- the discriminator network is therefore trained in a cycle for multiple steps before the generator network is trained for one step. The next training cycle then begins.
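The alternating schedule described above can be sketched as follows. The ratio of discriminator to generator steps is an illustrative assumption; the text only requires that the discriminator be trained more frequently, i.e. for multiple steps per cycle before one generator step.

```python
def adversarial_training_schedule(num_cycles, discriminator_steps=5):
    """Alternating training schedule: several discriminator steps per cycle,
    followed by one generator step. The default ratio of 5:1 is an
    illustrative assumption, not a value taken from the text."""
    schedule = []
    for _ in range(num_cycles):
        # Discriminator trained for multiple steps within the cycle ...
        schedule.extend(["train_discriminator"] * discriminator_steps)
        # ... then the generator is trained for one step; next cycle begins.
        schedule.append("train_generator")
    return schedule
```

For two cycles with three discriminator steps each, the schedule alternates three discriminator steps with a single generator step per cycle.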
- the method according to the invention also comprises the step that the artificial neural discriminator network is trained using mini-batches of the parameter sets of driving situation parameters, the artificial neural discriminator network alternately receiving a mini-batch of the first parameter set of driving situation parameters approximated by the artificial neural generator network and a mini-batch of the second parameter set of driving situation parameters provided by the data source.
- using mini-batches of the parameter sets of driving situation parameters for training the artificial neural discriminator network has the advantage that a mode collapse during the training of the generative adversarial networks may be avoided.
- the training of the generative adversarial networks may thus be advantageously stabilized by providing the parameter sets in the form of mini-batches.
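The alternating mini-batch supply can be sketched as a generator function. Equal batch sizes for both sources and the function names are illustrative assumptions.

```python
def alternating_minibatches(generated_sets, reference_sets, batch_size):
    """Yield mini-batches for the discriminator, alternating between the
    first parameter sets approximated by the generator network and the
    second parameter sets provided by the data source. Alternating the
    two sources is the stabilization measure described in the text."""
    limit = min(len(generated_sets), len(reference_sets))
    for start in range(0, limit, batch_size):
        yield "generated", generated_sets[start:start + batch_size]
        yield "reference", reference_sets[start:start + batch_size]
```

Each pair of yielded batches gives the discriminator one mini-batch of artificial data and one mini-batch of reference data in turn.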
- the invention furthermore relates to a test unit for approximating second test results from an identified subset of first test results of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle, using generative adversarial networks, which are configured to train an artificial neural generator network.
- the generative adversarial networks include an artificial neural generator network, which is configured to approximate a first parameter set of driving situation parameters made up of the surroundings parameters describing the surroundings of the motor vehicle and the ego parameters describing the status of the motor vehicle, based on randomly generated input data, the second test results being formed by the first parameter set.
- the generative adversarial networks include an artificial neural discriminator network, which is configured to receive the first parameter set of driving situation parameters approximated by the artificial neural generator network and a second parameter set, in particular a reference parameter set, of driving situation parameters provided by a data source and made up of the surroundings parameters describing the surroundings of the motor vehicle and ego parameters describing the status of the motor vehicle.
- the identified subset of first test results is formed by the second parameter set.
- the artificial neural discriminator network is configured to output a function value for evaluating the first parameter set or the second parameter set.
- the artificial neural discriminator network is also configured to train the artificial neural generator network based on the evaluation of the first parameter set or the second parameter set.
- Generative adversarial networks of this type have the advantage that a large number of critical test results, i.e. second test results from an identified subset of first test results of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle, for example a control unit, may be approximated.
- time or resource savings may be achieved hereby, since the applicable parameter sets may be approximated by the generative adversarial networks in an efficient manner, whereby, on the one hand, a time advantage over conventional simulations or a real test operation of the motor vehicle exists and, on the other hand, computing power demand is reduced.
- the invention also relates to a computer program, including program code, for the purpose of carrying out the method according to the invention for training the artificial neural generator network and/or the method for training the artificial neural discriminator network when the computer program is run on a computer.
- the invention further relates to a computer-readable data carrier, including program code of a computer program, for the purpose of carrying out the method for training the artificial neural generator network and/or the method for training the artificial neural discriminator network when the computer program is run on a computer.
- the features of the generative adversarial networks described herein, which are configured to train the artificial neural generator network, are suitable for use with any type of component, whether software-based or hardware-based, i.e. software-in-the-loop or hardware-in-the-loop systems, simulations or components within the scope of test drives of the motor vehicle.
- the generative adversarial networks may be used within the scope of any vehicle type, for example an automobile, a commercial vehicle, a ship or an aircraft.
- FIG. 1 shows a block diagram of a test unit according to an exemplary embodiment of the invention
- FIG. 2 shows a flowchart of a method for training an artificial neural generator network of the generative adversarial networks
- FIG. 3 shows a flowchart of a method for training an artificial neural discriminator network of the generative adversarial networks
- FIG. 4 shows a flowchart of a partial aspect of the method for training the artificial neural discriminator network of the generative adversarial networks
- FIG. 5 shows a flowchart of a relationship between the training of the artificial neural generator network and the artificial neural discriminator network of the generative adversarial networks
- FIG. 6 shows a function diagram of a target function approximated by the artificial neural generator network.
- FIG. 1 shows a block diagram of a test unit according to an exemplary embodiment of the invention.
- Test unit 1 is configured to approximate second test results E 2 from an identified subset of first test results E 1 of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle, using generative adversarial networks GAN.
- test unit 1 comprises generative adversarial networks GAN.
- the test unit preferably has a software-based design in the form of a software-in-the-loop system.
- the test unit may have, for example, a hardware-based design in the form of a hardware-in-the-loop system.
- Generative adversarial networks GAN include artificial neural generator network K 1 and artificial neural discriminator network K 2 and are configured to train both artificial neural generator network K 1 and artificial neural discriminator network K 2 .
- the computer-implemented method for training artificial neural generator network K 1 of the generative adversarial networks GAN is used to reproduce critical test cases or test results of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle.
- an artificial neural generator network K 1 is provided in step S 1 .
- Artificial neural generator network K 1 approximates a first parameter set P 1 of driving situation parameters.
- the driving situation parameters are made up of surroundings parameters FP 1 describing the surroundings of the motor vehicle and ego parameters FP 2 describing the status of the motor vehicle.
- First parameter set P 1 is approximated by artificial neural generator network K 1 based on randomly generated input data D.
- the critical test results or second test results E 2 from an identified subset of first test results E 1 of the virtual test of the device for the at least partial autonomous guidance of the motor vehicle are formed by first parameter set P 1 .
- an artificial neural discriminator network K 2 is furthermore provided in step S 2 .
- Artificial neural discriminator network K 2 receives first parameter set P 1 of driving situation parameters approximated by artificial neural generator network K 1 and a second parameter set P 2 provided by a data source DS.
- Second parameter set P 2 , also referred to as the reference parameter set, of driving situation parameters is made up of surroundings parameters FP 1 describing the surroundings of the motor vehicle and ego parameters FP 2 describing the status of the motor vehicle.
- the identified subset of first test results E 1 is formed by second parameter set P 2 .
- the method furthermore comprises, in step S 3 , an outputting of a function value F for evaluating first parameter set P 1 or second parameter set P 2 by artificial neural discriminator network K 2 .
- Artificial neural discriminator network K 2 evaluates in each case either first parameter set P 1 , which is generated by artificial neural generator network K 1 , or second parameter set P 2 , which is provided to artificial neural discriminator network K 2 by data source DS.
- If function value F output by artificial neural discriminator network K 2 is within a first value range WB 1 , artificial neural discriminator network K 2 determines that evaluated parameter set P 1 , P 2 has been approximated by artificial neural generator network K 1 ; if function value F is within a second value range WB 2 , it determines that the evaluated parameter set has been provided by data source DS.
- a total value range WB 3 comprising first value range WB 1 and second value range WB 2 is, for example, between numeric values 0 and 1.
- Another suitable value range may be used, for example, depending on the utilized activation function of the artificial neural discriminator network.
- Total value range WB 3 is divided into first value range WB 1 and second value range WB 2 adjacent to first value range WB 1 .
- Second parameter set P 2 of driving situation parameters provided by data source DS is generated by a simulation.
- second parameter set P 2 of driving situation parameters may be generated, for example, by an artificial neural network.
- ego parameters FP 2 are a velocity of the motor vehicle
- surrounding parameters FP 1 are a velocity of another motor vehicle and a distance between the motor vehicle and the other motor vehicle.
- the cut-in scenario may be approximated using these parameters.
- the velocity of the ego vehicle and of the other vehicle, which is also referred to as the fellow vehicle, is constant. Since the velocity of the ego vehicle is higher than that of the fellow, the ego vehicle must be braked to avoid a collision between the two vehicles.
- FIG. 2 shows a flowchart of a method for training an artificial neural generator network of generative adversarial networks according to the exemplary embodiment of the invention.
- In step S 4 of the method according to the invention, artificial neural generator network K 1 is trained by artificial neural discriminator network K 2 , based on the evaluation of first parameter set P 1 or second parameter set P 2 .
- input data D for the generator network is a defined number of random values between 0 and 1, which are referred to as input noise.
- the generator then generates artificial data from the input noise in the form of parameter sets, which are evaluated by the discriminator network.
- the evaluation is the actual value for the error calculation.
- the setpoint value is, for example, a numeric value of 1, since the generator network is to learn to generate data, which will be viewed as real by the discriminator network.
- the generator network is updated with the aid of the actual and setpoint values in the form of an adaptation of the weights and other parameterizable variables of the artificial neural generator network.
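The data flow of one generator training step described above can be sketched with toy stand-ins for the two networks. The linear generator and the scalar sigmoid discriminator below are deliberate simplifications for illustration, not the patented architectures:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(noise, weights):
    # toy stand-in for generator network K1: a single linear layer mapping
    # 24 noise values to a 2-value parameter set
    return noise @ weights

def discriminator(params):
    # toy stand-in for discriminator network K2: a sigmoid-squashed score in (0, 1)
    return 1.0 / (1.0 + np.exp(-params.sum()))

noise = rng.random(24)                        # input noise: random values in [0, 1)
weights = rng.standard_normal((24, 2)) * 0.1

setpoint = 1.0                                # the generator aims to be judged "real"
actual = discriminator(generator(noise, weights))
error = setpoint - actual                     # drives the adaptation of the weights
```

In a full implementation, `error` would be backpropagated through the discriminator into the generator's weights and other parameterizable variables.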
- Artificial neural generator network K 1 includes an input layer of 24 neurons and 5 hidden layers.
- a first hidden layer has 92 neurons and a dropout of 10%.
- a second hidden layer has 64 neurons and a dropout of 10%.
- a third hidden layer has 32 neurons and a dropout of 10%.
- a fourth hidden layer has 24 neurons without a dropout, and a fifth hidden layer has 24 neurons without a dropout.
- An output layer has 2 neurons for outputting the parameter set of driving situation parameters.
- Artificial neural discriminator network K 2 includes an input layer of 128 neurons and 6 hidden layers.
- a first hidden layer has 128 neurons and a dropout of 20%.
- a second through sixth hidden layer each have 80 neurons and a dropout of 10%.
- An output layer has 1 neuron with a sigmoid activation function.
- the hidden layers of artificial neural generator network K 1 and artificial neural discriminator network K 2 use an ELU activation function and an Adam optimization method.
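The layer configurations listed above can be summarized in a plain NumPy forward pass. Dropout is omitted here because it acts only during training; the weight initialization and the identity output activation of the generator are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def elu(x, alpha=1.0):
    # ELU activation used in the hidden layers
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_layers(sizes):
    # random weight matrices between consecutive layer sizes (assumed init)
    return [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x, layers, out_activation):
    for w in layers[:-1]:
        x = elu(x @ w)                        # ELU on every hidden layer
    return out_activation(x @ layers[-1])

# Generator K1: 24-neuron input, hidden layers of 92/64/32/24/24, 2 outputs
params = forward(rng.random(24),
                 make_layers([24, 92, 64, 32, 24, 24, 2]), lambda z: z)
# Discriminator K2: 128-neuron input, hidden layers of 128 and five of 80,
# 1 output neuron with a sigmoid activation
score = forward(rng.standard_normal(128),
                make_layers([128, 128, 80, 80, 80, 80, 80, 1]), sigmoid)
```

The generator thus emits a two-value parameter set of driving situation parameters, while the discriminator emits a single function value in (0, 1).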
- The aforementioned parameterizations and configurations of artificial neural generator network K 1 and artificial neural discriminator network K 2 yield particularly advantageous approximation results when applied to the present target functions.
- FIG. 3 shows a flowchart of a method for training an artificial neural discriminator network of generative adversarial networks according to the exemplary embodiment of the invention.
- the method for training artificial neural discriminator network K 2 of the generative adversarial networks is used to distinguish a first parameter set P 1 of driving situation parameters from a second parameter set P 2 of driving situation parameters of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle.
- A first parameter set P 1 of driving situation parameters, approximated by an artificial neural generator network and made up of surroundings parameters FP 1 describing the surroundings of the motor vehicle and ego parameters FP 2 describing the status of the motor vehicle, is received by artificial neural discriminator network K 2 in step S 1 .
- A second parameter set P 2 , which is referred to as a reference parameter set, of driving situation parameters provided by a data source DS and made up of surroundings parameters FP 1 describing the surroundings of the motor vehicle and ego parameters FP 2 describing the status of the motor vehicle is furthermore received by artificial neural discriminator network K 2 in step S 2 .
- the present method furthermore comprises, in step S 3 , an approximation and outputting of a function value F for evaluating first parameter set P 1 or second parameter set P 2 by artificial neural discriminator network K 2 .
- the training of artificial neural discriminator network K 2 based on an approximation error AE calculated by comparing output function value F with a predefined setpoint value SW takes place in step S 4 .
- Function value F output by artificial neural discriminator network K 2 is an actual value IW of artificial neural discriminator network K 2 .
- Approximation error AE of artificial neural discriminator network K 2 is constituted by a difference between actual value IW and setpoint value SW.
- A weighting of artificial neural discriminator network K 2 , i.e., of the neurons of at least the hidden layers of artificial neural discriminator network K 2 , is adapted using approximation error AE.
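The error calculation and the resulting weight adaptation can be sketched as follows. The gradient-descent update rule and the learning rate are assumed for illustration; the text fixes only that approximation error AE is the difference between actual value IW and setpoint value SW:

```python
def approximation_error(actual_value, setpoint_value):
    # AE as described in the text: difference between actual value IW
    # and setpoint value SW
    return actual_value - setpoint_value

def adapt_weight(weight, error, gradient, learning_rate=0.01):
    # hypothetical adaptation rule (plain gradient descent): nudge the
    # weight against the error it contributed to
    return weight - learning_rate * error * gradient

ae = approximation_error(0.8, 1.0)            # discriminator judged too low
new_weight = adapt_weight(1.0, ae, 2.0)       # weight is raised to reduce AE
```

In practice, the per-weight `gradient` would come from backpropagation through the network rather than being supplied by hand.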
- FIG. 4 shows a flowchart of a partial aspect of the method for training an artificial neural discriminator network of the generative adversarial networks according to the exemplary embodiment of the invention.
- Artificial neural discriminator network K 2 is trained using mini-batches of parameter sets P 1 , P 2 of driving situation parameters. Artificial neural discriminator network K 2 alternately receives a mini-batch of first parameter set P 1 of driving situation parameters approximated by artificial neural generator network K 1 and a mini-batch of second parameter set P 2 of driving situation parameters provided by data source DS.
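The alternating mini-batch delivery described above can be sketched as a simple schedule. The batch size and the 0/1 labels are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def alternating_minibatches(generated, reference, batch_size=32):
    """Yield (batch, label) pairs, alternating a mini-batch of generator
    output (label 0) with a mini-batch of data-source output (label 1)."""
    for start in range(0, len(generated), batch_size):
        yield generated[start:start + batch_size], 0
        yield reference[start:start + batch_size], 1

generated = rng.random((64, 2))   # toy first parameter sets P1
reference = rng.random((64, 2))   # toy second parameter sets P2
labels = [label for _, label in alternating_minibatches(generated, reference)]
print(labels)                     # alternates: [0, 1, 0, 1]
```

Training on homogeneous batches from one source at a time, rather than mixing both sources in every batch, is one common way to keep the discriminator's updates stable.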
- FIG. 5 shows a flowchart of a relationship between the training of the artificial neural generator network and the artificial neural discriminator network of the generative adversarial networks according to the exemplary embodiment of the invention.
- Artificial neural generator network K 1 and artificial neural discriminator network K 2 are trained in turns. Artificial neural discriminator network K 2 is trained more frequently than artificial neural generator network K 1 , the discriminator network being trained in a cycle for multiple steps before the generator network is trained for one step. The next training cycle then begins.
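The described training ratio can be sketched as a schedule. The 5:1 ratio below is an assumed example; the text fixes only that the discriminator network is trained for multiple steps per cycle before the generator network is trained for one step:

```python
def training_schedule(cycles, disc_steps_per_cycle=5):
    # per cycle: several discriminator steps, then a single generator step
    steps = []
    for _ in range(cycles):
        steps += ["discriminator"] * disc_steps_per_cycle
        steps += ["generator"]
    return steps

schedule = training_schedule(2)   # two full training cycles
```

Training the discriminator more often keeps its evaluations ahead of the generator, so that the feedback driving the generator's updates remains meaningful.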
- FIG. 6 shows a function diagram of a target function approximated by the artificial neural generator network according to the exemplary embodiment of the invention.
- The present representation illustrates the safety target function approximated by artificial neural generator network K 1 .
- the safety target function is applied, for example, to a traffic situation, in which a highly automated or autonomous vehicle travels in a predetermined lane, and another vehicle having a reduced velocity compared to the ego vehicle cuts into the lane of the ego vehicle from another lane at a certain distance.
- a traffic situation of this type is referred to as a cut-in scenario.
- the area represented in FIG. 6 and enclosed by the essentially triangular geometry represents a subset, identified by a simulation, of test results of interest of the virtual test of the device for the at least partial autonomous guidance of the motor vehicle.
- Critical test results are test results in which, in the cut-in scenario, a collision of the ego vehicle with the fellow vehicle occurs, or in which limit values or a defined limit range for the occurrence of a collision between the two vehicles is reached.
- The point values illustrated in the diagram, which lie primarily within the essentially triangular area, correspond to the function values approximated by artificial neural generator network K 1 with respect to the safety target function of the cut-in scenario.
- The artificial neural generator network, i.e., the algorithm underlying the network, is thus advantageously suitable for a relatively accurate approximation of the critical test results of interest.
Description
- This nonprovisional application claims priority under 35 U.S.C. § 119(a) to German Patent Application No. 10 2019 122 414.4, which was filed in Germany on Aug. 21, 2019, European Patent Application No. 19192741.7, which was filed in Europe on Aug. 21, 2019 and European Patent Application No. 19192743.3, which was filed in Europe on Aug. 21, 2019, and which are all herein incorporated by reference.
- The present invention relates to a computer-implemented method for training an artificial neural generator network of generative adversarial networks for the purpose of approximating second test results from an identified subset of first test results of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle.
- The present invention also relates to a computer-implemented method for training an artificial neural discriminator network of generative adversarial networks for the purpose of distinguishing a first parameter set of driving situation parameters from a second parameter set of driving situation parameters of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle.
- The present invention further relates to a test unit for approximating second test results from an identified subset of first test results of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle, using generative adversarial networks which are configured to train an artificial neural generator network.
- The present invention further relates to a computer program and a computer-readable data carrier.
- Driver assistance systems, e.g. an adaptive speed controller and/or functions for highly automated driving, may be verified and validated with the aid of various checking mechanisms. In particular, hardware-in-the-loop methods, software-in-the-loop methods, simulations and/or test drives may be used.
- The expenditure, in particular the time and cost expenditure, for testing vehicle functions of this type, using the aforementioned checking methods, is typically very high, since a large number of potentially possible driving situations must be tested.
- This may result, in particular, in a high complexity for test drives as well as for simulations. DE 10 2017 200 180 A1 specifies a method for verifying and/or validating a vehicle function, which is provided for autonomously guiding a vehicle in the longitudinal and/or transverse direction(s).
- The method comprises an ascertainment, based on surroundings data relating to surroundings of the vehicle, of a test control instruction of the vehicle function to an actuator of the vehicle, the test control instruction not being implemented by the actuator.
- The method also comprises a simulation, based on the surroundings data and using a traffic participant model relating to at least one traffic participant in the surroundings of the vehicle, of a fictitious traffic situation, which would be present if the test control instruction had been implemented.
- The method further comprises a provision of test data relating to the fictitious traffic situation. The vehicle function is passively operated in the vehicle to ascertain the test control instruction.
- The disadvantage of this method is that, to verify and/or validate the vehicle function, an actual operation of the vehicle is necessary to ascertain the necessary data.
- As a result, there is a need to improve existing methods and systems for the purpose of being able to efficiently ascertain so-called critical test cases within the scope of a scenario-based testing for systems and system components during highly automated driving.
- It is therefore an object of the present invention to provide a method, a test unit, a computer program and a computer-readable data carrier, which make it possible to be able to ascertain critical test cases within the scope of a scenario-based testing for systems and system components during highly automated driving.
- According to an exemplary embodiment of the invention, the object is achieved by a computer-implemented method for training an artificial neural generator network of generative adversarial networks, a computer-implemented method for training an artificial neural discriminator network of generative adversarial networks, a test unit, a computer program and a computer-readable data carrier, for example a non-transitory computer readable media.
- The invention relates to a computer-implemented method for training an artificial neural generator network of generative adversarial networks (GAN) for the purpose of approximating second test results from an identified subset of first test results of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle.
- The generative adversarial networks include the artificial neural generator network and an artificial neural discriminator network.
- The method comprises a provision of the artificial neural generator network, which approximates a first parameter set of driving situation parameters, made up of the surroundings parameters describing the surroundings of the motor vehicle and the ego parameters describing the status of the motor vehicle, based on randomly generated input data.
- The second test results are formed by the first parameter set. The method also comprises a provision of the artificial neural discriminator network, which receives the first parameter set of driving situation parameters approximated by the artificial neural generator network and a second parameter set, in particular a reference parameter set, of driving situation parameters provided by a data source and made up of the surroundings parameters describing the surroundings of the motor vehicle and ego parameters describing the status of the motor vehicle.
- The identified subset of first test results is formed by the second parameter set. The method furthermore comprises an outputting of a function value for evaluating the first parameter set or the second parameter set by the artificial neural discriminator network.
- In addition, the method comprises a training of the artificial neural generator network based on the evaluation of the first parameter set or the second parameter set by the artificial neural discriminator network.
- In this way, the artificial neural generator network may be advantageously trained to approximate parameter sets which are similar to the second parameter sets received from the data source by the artificial neural discriminator network.
- The second parameter sets provided to the artificial neural discriminator network by the data source may thus be reproduced by the artificial neural generator network within a predefined definition range. The second parameter sets received by the artificial neural discriminator network, which may be generated by a simulation or an artificial neural network, are critical test results of interest from a total quantity of test results.
- The test results relate to specific traffic situations used in scenario-based testing for validating, checking and/or verifying functions of a system or control unit for the at least partial autonomous guidance of a motor vehicle.
- In the case of scenario-based testing of systems and system components for the autonomous guidance of a motor vehicle, scenarios are defined, which may be referred to as an abstraction of a traffic situation. One logical scenario is the abstraction of a traffic situation, including the road, the driving behavior and the surrounding traffic, without any specific parameter values.
- The selection of specific parameter values turns the logical scenario into a specific scenario. A specific scenario of this type corresponds to an individual traffic situation in each case.
- The method according to the invention also comprises the step that, if the function value output by the artificial neural discriminator network is within a first value range, the artificial neural discriminator network determines that the evaluated parameter set has been approximated by the artificial neural generator network and, if the function value output by the artificial neural discriminator network is within a second value range, the artificial neural discriminator network determines that the evaluated parameter set has been provided by the data source.
- The artificial neural discriminator network is thus advantageously able to decide, based on the output function value, whether the evaluated parameter set was approximated by the generator network or provided by the data source, due to the division into the first value range and the second value range.
- The method according to the invention further comprises the step that a total value range comprising the first value range and the second value range is between 0 and 1, the total value range being divided into the first value range and the second value range adjacent to the first value range.
- Due to this seamless transition between the first value range and the second value range, an accurate division of the output function values into the first value range or the second value range may be advantageously carried out within the defined total value range.
- The method according to the invention also comprises the step that the second parameter set of driving situation parameters provided by the data source is generated by an artificial neural network or by a simulation. The second parameter set of driving situation parameters may thus be advantageously obtained from a plurality of data sources or generated in different ways.
- The method according to the invention also comprises the step that the ego parameters include a velocity of the motor vehicle, and the surroundings parameters include a velocity of another motor vehicle and a distance between the motor vehicle and the other motor vehicle.
- A so-called cut-in scenario, for example, may be approximated using these parameters. The cut-in scenario may thus be referred to as a traffic situation, in which a highly automated or autonomous vehicle travels in a predetermined lane, and another vehicle having a reduced velocity compared to the ego vehicle cuts into the lane of the ego vehicle from another lane at a certain distance.
- The velocity of the ego vehicle and of the other vehicle, which is also referred to as the fellow vehicle, is constant. Since the velocity of the ego vehicle is higher than that of the fellow, the ego vehicle must be braked to avoid a collision between the two vehicles.
- Based on the aforementioned ego parameters and surroundings parameters, critical traffic situations may thus be approximated by the method according to the invention within a predefined definition range of aforementioned parameters.
- The method according to the invention also comprises the step that the artificial neural generator network includes an input layer having 24 neurons and 5 hidden layers, a first hidden layer having 92 neurons and a dropout of 10%, a second hidden layer having 64 neurons and a dropout of 10%, a third hidden layer having 32 neurons and a dropout of 10%, a fourth hidden layer having 24 neurons without a dropout and a fifth hidden layer having 24 neurons without a dropout, and an output layer having 2 neurons for outputting the parameter set of driving situation parameters.
- Such a configuration of the artificial neural generator network is advantageously able to approximate a target function to be approximated with a high degree of accuracy.
- The method according to the invention also comprises the step that the artificial neural discriminator network includes an input layer having 128 neurons and 6 hidden layers, a first hidden layer having 128 neurons and a dropout of 20% and a second through sixth hidden layer having 80 neurons and a dropout of 10%, and an output layer having 1 neuron with a sigmoid activation function.
- Such a configuration of the artificial neural discriminator network advantageously enables the artificial neural discriminator network to be able to distinguish between the parameter sets generated by the artificial neural generator network and the parameter sets provided by the data source with a high degree of accuracy.
- The method according to the invention also comprises the step that the hidden layers of the artificial neural generator network and the artificial neural discriminator network use an ELU activation function and an Adam optimization method.
- Optimal results may thus be advantageously achieved by the artificial neural generator network and the artificial neural discriminator network. The present invention also relates to a computer-implemented method for training an artificial neural discriminator network of generative adversarial networks for the purpose of distinguishing a first parameter set of driving situation parameters from a second parameter set of driving situation parameters of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle.
- The method comprises a receipt, by the artificial neural discriminator network, of a first parameter set of driving situation parameters approximated by an artificial neural generator network, made up of surroundings parameters describing the surroundings of the motor vehicle and ego parameters describing the status of the motor vehicle.
- The method also comprises a receipt, by the artificial neural discriminator network, of a second parameter set, in particular a reference parameter set, of driving situation parameters provided by a data source and made up of surroundings parameters describing the surroundings of the motor vehicle and ego parameters describing the status of the motor vehicle.
- The method further comprises an approximation and outputting of a function value for evaluating the first parameter set or the second parameter set by the artificial neural discriminator network. Moreover, the method includes a training of the artificial neural discriminator network based on an approximation error calculated by comparing the output function value with a predefined setpoint value.
- The artificial neural discriminator network is thus able to evaluate the first parameter set generated by the artificial neural generator network as well as the second parameter set provided by the data source and calculate an approximation error of the artificial neural discriminator network, based on the evaluation of the first parameter set or the second parameter set.
- The artificial neural discriminator network thus evaluates how similar the parameter sets generated by the artificial neural generator network are to the second parameter sets provided by the data source. The evaluation, i.e., the function value output by the artificial neural discriminator network, is a numeric value, which may be, for example, between 0 and 1.
- An evaluation of 0 may indicate, for example, that the artificial neural discriminator network views the evaluated parameter set as having been generated by the artificial neural generator network. If the evaluation is equal to 1, the artificial neural discriminator network considers the evaluated parameter set as having been provided by the data source.
- The parameter sets provided by the data source may be viewed as real data, due to the fact that they may originate, for example, in a simulation or a real test operation of a motor vehicle. In contrast, the parameter sets generated by the artificial neural generator network are viewed as having been artificially generated. The task of the artificial neural discriminator network is therefore to distinguish between real and artificially generated data.
- The role of the artificial neural generator network is to generate, based on feedback from the artificial neural discriminator network, parameter sets of driving situation parameters which correspond to the real data provided by the data source within a predefined definition range.
- The method according to the invention also comprises the step that the function value output by the artificial neural discriminator network is an actual value of the artificial neural discriminator network, the approximation error of the artificial neural discriminator network being constituted by a difference between the actual value and the setpoint value, and a weighting of the artificial neural discriminator network being adapted using the approximation error.
- By adapting the weighting of the artificial neural discriminator network using the ascertained approximation error, the artificial neural discriminator network is thus advantageously trained to improve its evaluation result, whether the received parameter sets are parameter sets provided by the data source or parameter sets generated by the artificial neural generator network.
- The method according to the invention also comprises the step that the artificial neural generator network and the artificial neural discriminator network are alternately trained, the artificial neural discriminator network being trained more frequently than the artificial neural generator network.
- This procedure has positive effects on the training process, since the respective networks are trained in a training cycle, the artificial neural discriminator network being trained more frequently than the artificial neural generator network, i.e., a training cycle of the artificial neural discriminator network is thus longer than a training cycle of the artificial neural generator network.
- The discriminator network is therefore trained in a cycle for multiple steps before the generator network is trained for one step. The next training cycle then begins.
- The method according to the invention also comprises the step that the artificial neural discriminator network is trained using mini-batches of the parameter sets of driving situation parameters, the artificial neural discriminator network alternately receiving a mini-batch of the first parameter set of driving situation parameters approximated by the artificial neural generator network and a mini-batch of the second parameter set of driving situation parameters provided by the data source.
- The use of mini-batches of the parameter sets of driving situation parameters for training the artificial neural discriminator network has the advantage that the occurrence of a mode collapse during the training of the generative adversarial networks may be avoided hereby.
- The training of the generative adversarial networks may thus be advantageously stabilized by providing the parameter sets in the form of mini-batches.
- The invention furthermore relates to a test unit for approximating second test results from an identified subset of first test results of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle, using generative adversarial networks, which are configured to train an artificial neural generator network.
- The generative adversarial networks include an artificial neural generator network, which is configured to approximate a first parameter set of driving situation parameters made up of the surroundings parameters describing the surroundings of the motor vehicle and the ego parameters describing the status of the motor vehicle, based on randomly generated input data, the second test results being formed by the first parameter set.
- In addition, the generative adversarial networks include an artificial neural discriminator network, which is configured to receive the first parameter set of driving situation parameters approximated by the artificial neural generator network and a second parameter set, in particular a reference parameter set, of driving situation parameters provided by a data source and made up of the surroundings parameters describing the surroundings of the motor vehicle and ego parameters describing the status of the motor vehicle.
- The identified subset of first test results is formed by the second parameter set. The artificial neural discriminator network is configured to output a function value for evaluating the first parameter set or the second parameter set. The artificial neural discriminator network is also configured to train the artificial neural generator network based on the evaluation of the first parameter set or the second parameter set.
- Generative adversarial networks of this type have the advantage that a large number of critical test results, i.e. second test results from an identified subset of first test results of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle, for example a control unit, may be approximated.
- For example, time or resource savings may be achieved hereby, since the applicable parameter sets may be approximated by the generative adversarial networks in an efficient manner, whereby, on the one hand, a time advantage over conventional simulations or a real test operation of the motor vehicle exists and, on the other hand, computing power demand is reduced.
- The invention also relates to a computer program, including program code, for the purpose of carrying out the method according to the invention for training the artificial neural generator network and/or the method for training the artificial neural discriminator network when the computer program is run on a computer.
- The invention further relates to a computer-readable data carrier, including program code of a computer program, for the purpose of carrying out the method for training the artificial neural generator network and/or the method for training the artificial neural discriminator network when the computer program is run on a computer.
- The features described herein of the generative adversarial networks, which are configured to train the artificial neural generator network, are suitable for use with any type of component, whether software-based or hardware-based, i.e. software-in-the-loop or hardware-in-the-loop systems, simulations or components within the scope of test drives of the motor vehicle.
- Likewise, the generative adversarial networks are disclosed within the scope of any vehicle type, for example an automobile, a commercial vehicle, a ship or an aircraft.
- Further scope of applicability of the present invention will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating exemplary embodiments of the invention, are given by way of illustration only, since various changes, combinations, and modifications within the spirit and scope of the invention will become apparent to those skilled in the art from this detailed description.
- The present invention will become more fully understood from the detailed description given hereinbelow and the accompanying drawings which are given by way of illustration only, and thus, are not limitive of the present invention, and wherein:
- FIG. 1 shows a block diagram of a test unit according to an exemplary embodiment of the invention;
- FIG. 2 shows a flowchart of a method for training an artificial neural generator network of the generative adversarial networks;
- FIG. 3 shows a flowchart of a method for training an artificial neural discriminator network of the generative adversarial networks;
- FIG. 4 shows a flowchart of a partial aspect of the method for training the artificial neural discriminator network of the generative adversarial networks;
- FIG. 5 shows a flowchart of a relationship between the training of the artificial neural generator network and the artificial neural discriminator network of the generative adversarial networks; and
- FIG. 6 shows a function diagram of a target function approximated by the artificial neural generator network.
FIG. 1 shows a block diagram of a test unit according to an exemplary embodiment of the invention. Test unit 1 is configured to approximate second test results E2 from an identified subset of first test results E1 of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle, using generative adversarial networks GAN. - As illustrated,
test unit 1 comprises generative adversarial networks GAN. The test unit preferably has a software-based design in the form of a software-in-the-loop system. Alternatively, the test unit may have, for example, a hardware-based design in the form of a hardware-in-the-loop system. - Generative adversarial networks GAN include artificial neural generator network K1 and artificial neural discriminator network K2 and are configured to train both artificial neural generator network K1 and artificial neural discriminator network K2.
- The computer-implemented method for training artificial neural generator network K1 of the generative adversarial networks GAN is used to reproduce critical test cases or test results of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle. Within the scope of the present method, an artificial neural generator network K1 is provided in step S1.
- Artificial neural generator network K1 approximates a first parameter set P1 of driving situation parameters. The driving situation parameters are made up of surroundings parameters FP1 describing the surroundings of the motor vehicle and ego parameters FP2 describing the status of the motor vehicle.
- First parameter set P1 is approximated by artificial neural generator network K1 based on randomly generated input data D. The critical test results or second test results E2 from an identified subset of first test results E1 of the virtual test of the device for the at least partial autonomous guidance of the motor vehicle are formed by first parameter set P1.
- Within the scope of the present method, an artificial neural discriminator network K2 is furthermore provided in step S2. Artificial neural discriminator network K2 receives first parameter set P1 of driving situation parameters approximated by artificial neural generator network K1 and a second parameter set P2 provided by a data source DS.
- Second parameter set P2, also referred to as the reference parameter set, of driving situation parameters is made up of surroundings parameters FP1 describing the surroundings of the motor vehicle and ego parameters FP2 describing the status of the motor vehicle.
- The identified subset of first test results E1 is formed by second parameter set P2. The method furthermore comprises, in step S3, an outputting of a function value F for evaluating first parameter set P1 or second parameter set P2 by artificial neural discriminator network K2.
- Artificial neural discriminator network K2 evaluates in each case either first parameter set P1, which is generated by artificial neural generator network K1, or second parameter set P2, which is provided to artificial neural discriminator network K2 by data source DS.
- If function value F output by artificial neural discriminator network K2 is within a first value range WB1, artificial neural discriminator network K2 now determines that evaluated parameter set P1, P2 has been approximated by artificial neural generator network K1.
- If function value F output by artificial neural discriminator network K2 is within a second value range WB2, artificial neural discriminator network K2 further determines that evaluated parameter set P1, P2 has been provided by data source DS. A total value range WB3, comprising first value range WB1 and second value range WB2, lies, for example, between the numeric values 0 and 1. - Alternatively, another suitable value range may be used, depending, for example, on the activation function utilized by the artificial neural discriminator network.
- Total value range WB3 is divided into first value range WB1 and second value range WB2 adjacent to first value range WB1.
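As a minimal sketch of this decision rule (assuming a split of the total value range at 0.5 and the common convention that the lower sub-range WB1 is assigned to generated data, neither of which is fixed by the description above):

```python
def classify_source(f_value, wb1_upper=0.5):
    """Map the discriminator's function value F, lying in the total value
    range WB3 = [0, 1], to the source of the evaluated parameter set.
    The split point 0.5 and the orientation of WB1/WB2 are assumptions."""
    if f_value < wb1_upper:
        return "generator"      # first value range WB1: approximated by K1
    return "data_source"        # second value range WB2: provided by DS
```

A score close to 0 thus indicates a parameter set approximated by the generator network, a score close to 1 a parameter set provided by the data source.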
- Second parameter set P2 of driving situation parameters provided by data source DS is generated by a simulation. Alternatively, second parameter set P2 of driving situation parameters may be generated, for example, by an artificial neural network.
- In the present exemplary embodiment, ego parameters FP2 are a velocity of the motor vehicle, and surrounding parameters FP1 are a velocity of another motor vehicle and a distance between the motor vehicle and the other motor vehicle.
- The cut-in scenario, for example, may be approximated using these parameters. The velocity of the ego vehicle and of the other vehicle, which is also referred to as the fellow vehicle, is constant. Since the velocity of the ego vehicle is higher than that of the fellow vehicle, the ego vehicle must be braked to avoid a collision between the two vehicles.
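Since both velocities are constant, the criticality of such a cut-in can be gauged by the time the ego vehicle needs to close the gap. The following sketch uses a simple time-to-collision measure and an assumed 3-second limit; the patent does not prescribe a concrete safety metric:

```python
def time_to_collision(v_ego, v_fellow, distance):
    """Time (s) until the ego vehicle reaches the slower fellow vehicle,
    with both velocities (m/s) held constant as in the cut-in scenario.
    Returns infinity if the ego vehicle is not closing the gap."""
    closing_speed = v_ego - v_fellow
    if closing_speed <= 0.0:
        return float("inf")
    return distance / closing_speed

def is_critical(v_ego, v_fellow, distance, ttc_limit=3.0):
    """Flag the situation as critical when the time to collision falls
    below an assumed limit, i.e. when braking is required."""
    return time_to_collision(v_ego, v_fellow, distance) < ttc_limit
```

With an ego velocity of 30 m/s and a fellow vehicle cutting in at 20 m/s, a cut-in distance of 20 m would be critical under these assumptions, while 50 m would not.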
-
FIG. 2 shows a flowchart of a method for training an artificial neural generator network of generative adversarial networks according to the exemplary embodiment of the invention. - In step S4 of the method according to the invention, artificial neural generator network K1 is trained by artificial neural discriminator network K2, based on the evaluation of first parameter set P1 or second parameter set P2.
- In the present exemplary embodiment, input data D for the generator network is a defined number of random values between 0 and 1, which are referred to as input noise. The generator then generates artificial data from the input noise in the form of parameter sets, which are evaluated by the discriminator network. The evaluation is the actual value for the error calculation.
- The setpoint value is, for example, a numeric value of 1, since the generator network is to learn to generate data, which will be viewed as real by the discriminator network. The generator network is updated with the aid of the actual and setpoint values in the form of an adaptation of the weights and other parameterizable variables of the artificial neural generator network.
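The generator update described above can be sketched as follows. The stand-in networks and the squared-error loss are illustrative assumptions; only the input noise, the discriminator score as actual value, and the setpoint value of 1 are taken from the text:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def generator_training_error(generator, discriminator, noise_dim=24, setpoint=1.0):
    """One generator evaluation: draw input noise in [0, 1), approximate a
    parameter set, score it with the discriminator (the actual value), and
    compare against the setpoint value. The weight update itself would be
    carried out by the chosen optimizer and is omitted here."""
    noise = rng.random(noise_dim)                 # randomly generated input data D
    parameter_set = generator(noise)              # first parameter set P1
    actual_value = discriminator(parameter_set)   # function value F (actual value)
    return (setpoint - actual_value) ** 2         # error driving the weight adaptation

# Hypothetical stand-in networks, for illustration only:
toy_generator = lambda z: z[:2]        # two driving situation parameters
toy_discriminator = lambda p: 0.5      # constant, untrained score
err = generator_training_error(toy_generator, toy_discriminator)
```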
- Artificial neural generator network K1 includes an input layer of 24 neurons and 5 hidden layers. A first hidden layer has 92 neurons and a dropout of 10%.
- A second hidden layer has 64 neurons and a dropout of 10%. A third hidden layer has 32 neurons and a dropout of 10%. A fourth hidden layer has 24 neurons without a dropout, and a fifth hidden layer has 24 neurons without a dropout. An output layer has 2 neurons for outputting the parameter set of driving situation parameters.
- Artificial neural discriminator network K2 includes an input layer of 128 neurons and 6 hidden layers.
- A first hidden layer has 128 neurons and a dropout of 20%. A second through sixth hidden layer each have 80 neurons and a dropout of 10%. An output layer has 1 neuron with a sigmoid activation function.
- The hidden layers of artificial neural generator network K1 and artificial neural discriminator network K2 use an ELU activation function and an Adam optimization function. The aforementioned parameterizations and configurations of artificial neural generator network K1 and artificial neural discriminator network K2 yield particularly advantageous approximation results when applied to the present target functions.
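The two topologies above can be captured as plain data, independent of any particular deep-learning framework (in Keras, for instance, each tuple would map to a Dense layer with ELU activation followed by a Dropout layer):

```python
# Layer widths and dropout rates exactly as stated in the description above.
GENERATOR_K1 = {
    "input_neurons": 24,
    "hidden": [(92, 0.10), (64, 0.10), (32, 0.10), (24, 0.0), (24, 0.0)],
    "output_neurons": 2,        # the parameter set of driving situation parameters
    "activation": "elu",
    "optimizer": "adam",
}
DISCRIMINATOR_K2 = {
    "input_neurons": 128,
    "hidden": [(128, 0.20)] + [(80, 0.10)] * 5,   # second through sixth layer: 80 neurons
    "output_neurons": 1,        # single sigmoid score in the total value range [0, 1]
    "activation": "elu",
    "optimizer": "adam",
    "output_activation": "sigmoid",
}
```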
-
FIG. 3 shows a flowchart of a method for training an artificial neural discriminator network of generative adversarial networks according to the exemplary embodiment of the invention. - The method for training artificial neural discriminator network K2 of the generative adversarial networks is used to distinguish a first parameter set P1 of driving situation parameters from a second parameter set P2 of driving situation parameters of a virtual test of a device for the at least partial autonomous guidance of a motor vehicle.
- Within the scope of the present method, a first parameter set of driving situation parameters, approximated by an artificial neural generator network and made up of surroundings parameters FP1 describing the surroundings of the motor vehicle and ego parameters FP2 describing the status of the motor vehicle, is received by artificial neural discriminator network K2 in step S1.
- Within the scope of the present method, a second parameter set P2 of driving situation parameters, also referred to as a reference parameter set, provided by a data source DS and made up of surroundings parameters FP1 describing the surroundings of the motor vehicle and ego parameters FP2 describing the status of the motor vehicle, is furthermore received by artificial neural discriminator network K2 in step S2.
- The present method furthermore comprises, in step S3, an approximation and outputting of a function value F for evaluating first parameter set P1 or second parameter set P2 by artificial neural discriminator network K2. The training of artificial neural discriminator network K2 based on an approximation error AE calculated by comparing output function value F with a predefined setpoint value SW takes place in step S4.
- Function value F output by artificial neural discriminator network K2 is an actual value IW of artificial neural discriminator network K2.
- Approximation error AE of artificial neural discriminator network K2 is constituted by a difference between actual value IW and setpoint value SW. A weighting of artificial neural discriminator K2, i.e. the neurons of at least the hidden layers of artificial neural discriminator network K2, is adapted using approximation error AE.
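The adaptation of the weights using approximation error AE can be illustrated on a single sigmoid output neuron standing in for the full discriminator network. Only the relation AE = actual value − setpoint value is taken from the text; the sigmoid unit, the squared-error loss, and the gradient rule are assumptions:

```python
import numpy as np

def discriminator_step(weights, x, setpoint, learning_rate=0.1):
    """One illustrative weight adaptation for a single sigmoid output neuron.
    Computes the actual value IW, the approximation error AE, and a
    gradient-descent update of the weights driven by AE."""
    actual = 1.0 / (1.0 + np.exp(-weights @ x))    # actual value IW (sigmoid output)
    ae = actual - setpoint                          # approximation error AE
    gradient = ae * actual * (1.0 - actual) * x     # d(AE^2 / 2) / d(weights)
    return weights - learning_rate * gradient, ae

w = np.array([0.1, -0.2])
sample = np.array([1.0, 1.0])                       # a parameter set from data source DS
w, ae_before = discriminator_step(w, sample, setpoint=1.0)
_, ae_after = discriminator_step(w, sample, setpoint=1.0)
```

After each adaptation the actual value moves toward the setpoint value, so the magnitude of the approximation error shrinks.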
-
FIG. 4 shows a flowchart of a partial aspect of the method for training an artificial neural discriminator network of the generative adversarial networks according to the exemplary embodiment of the invention. - Artificial neural discriminator network K2 is trained using mini-batches of parameter sets P1, P2 of driving situation parameters. Artificial neural discriminator network K2 alternately receives a mini-batch of first parameter set P1 of driving situation parameters approximated by artificial neural generator network K1 and a mini-batch of second parameter set P2 of driving situation parameters provided by data source DS.
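This alternation can be sketched as a simple interleaving of mini-batches. The paired setpoint values (0 for approximated data, 1 for data from the data source) follow the usual GAN convention and are an assumption here:

```python
def alternating_minibatches(generator_batches, data_source_batches):
    """Interleave mini-batches for discriminator training: one mini-batch of
    first parameter sets P1 from the generator network, then one mini-batch
    of second parameter sets P2 from data source DS, and so on."""
    for p1_batch, p2_batch in zip(generator_batches, data_source_batches):
        yield p1_batch, 0.0     # approximated parameter sets, setpoint 0
        yield p2_batch, 1.0     # reference parameter sets, setpoint 1

stream = list(alternating_minibatches([["p1a"], ["p1b"]], [["p2a"], ["p2b"]]))
```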
-
FIG. 5 shows a flowchart of a relationship between the training of the artificial neural generator network and the artificial neural discriminator network of the generative adversarial networks according to the exemplary embodiment of the invention. - Artificial neural generator network K1 and artificial neural discriminator network K2 are trained in turns. Artificial neural discriminator network K2 is trained more frequently than artificial neural generator network K1, the discriminator network being trained in a cycle for multiple steps before the generator network is trained for one step. The next training cycle then begins.
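The cycle structure can be laid out explicitly. The 5:1 ratio used as a default here is an illustrative assumption; the text only states that the discriminator network is trained more frequently:

```python
def training_schedule(cycles, discriminator_steps=5):
    """Enumerate the alternating training: within each cycle the
    discriminator network is trained for several steps before the
    generator network is trained for one step."""
    steps = []
    for _ in range(cycles):
        steps += ["discriminator"] * discriminator_steps
        steps.append("generator")
    return steps

schedule = training_schedule(cycles=2, discriminator_steps=3)
```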
-
FIG. 6 shows a function diagram of a target function approximated by the artificial neural generator network according to the exemplary embodiment of the invention. - The present representation illustrates the safety target function approximated by artificial neural generator network K1.
- The safety target function is applied, for example, to a traffic situation, in which a highly automated or autonomous vehicle travels in a predetermined lane, and another vehicle having a reduced velocity compared to the ego vehicle cuts into the lane of the ego vehicle from another lane at a certain distance. A traffic situation of this type is referred to as a cut-in scenario.
- The area represented in
FIG. 6 and enclosed by the essentially triangular geometry represents a subset, identified by a simulation, of test results of interest of the virtual test of the device for the at least partial autonomous guidance of the motor vehicle. - Critical test results are test results in which, in the cut-in scenario, a collision between the ego vehicle and the fellow vehicle occurs, or in which limit values or a defined limit range for the occurrence of a collision between the two vehicles is reached.
- The point values illustrated in the diagram, which lie primarily within the essentially triangular area, correspond to the function values approximated by artificial neural generator network K1 with respect to the safety target function of the cut-in scenario. The artificial neural generator network, i.e. the algorithm underlying the network, is thus advantageously suitable for a relatively accurate approximation of the critical test results of interest.
- Although specific embodiments have been illustrated and described herein, it is understandable to those skilled in the art that a large number of alternative and/or equivalent implementations exist. It should be noted that the exemplary embodiment or exemplary embodiments is/are only examples and are not used to limit the scope, the applicability or the configuration in any way.
- Rather, the aforementioned summary and detailed description provide those skilled in the art with a convenient set of instructions for implementing at least one exemplary embodiment, it being understood that various modifications in the range of functions and the arrangement of the elements may be carried out without deviating from the scope of the attached claims and their legal equivalents. This application is generally intended to cover changes, adaptations, and variations of the embodiments illustrated herein.
- The invention being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the invention, and all such modifications as would be obvious to one skilled in the art are to be included within the scope of the following claims.
Claims (15)
Applications Claiming Priority (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE102019122414.4 | 2019-08-21 | ||
| EP19192743.3 | 2019-08-21 | ||
| EP19192741.7 | 2019-08-21 | ||
| EP19192741.7A EP3783446B1 (en) | 2019-08-21 | 2019-08-21 | Computer-implemented method and test unit for approximating a subset of test results |
| DE102019122414 | 2019-08-21 | ||
| EP19192743.3A EP3783452B1 (en) | 2019-08-21 | 2019-08-21 | Computer-implemented method and test unit for approximizing test results and method for providing a trained, artificial neural network |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20210056432A1 true US20210056432A1 (en) | 2021-02-25 |
Family
ID=74645737
Family Applications (3)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/999,533 Abandoned US20210056432A1 (en) | 2019-08-21 | 2020-08-21 | Method for training an artificial neural generator network, method for training an artificial neural discriminator network, and test unit |
| US17/559,828 Active 2041-05-30 US12254415B2 (en) | 2019-08-21 | 2021-12-22 | Computer-implemented method and test unit for approximating a subset of test results |
| US17/559,716 Pending US20220138575A1 (en) | 2019-08-21 | 2021-12-22 | Computer implemented method and test unit for approximating test results and a method for providing a trained, artificial neural network |
Country Status (1)
| Country | Link |
|---|---|
| US (3) | US20210056432A1 (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| IN202211070987A (en) * | 2022-12-08 | 2022-12-16 |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180293713A1 (en) * | 2017-04-06 | 2018-10-11 | Pixar | Denoising monte carlo renderings using machine learning with importance sampling |
| US20180336471A1 (en) * | 2017-05-19 | 2018-11-22 | Mehdi Rezagholizadeh | Semi-supervised regression with generative adversarial networks |
| US20190130278A1 (en) * | 2017-10-26 | 2019-05-02 | Nvidia Corporation | Progressive modification of generative adversarial neural networks |
| US20190259153A1 (en) * | 2018-02-22 | 2019-08-22 | Siemens Healthcare Gmbh | Cross domain medical image segmentation |
| US20190392580A1 (en) * | 2018-06-26 | 2019-12-26 | Definiens Ag | Deep Learning Method For Tumor Cell Scoring On Cancer Biopsies |
| US20200134494A1 (en) * | 2018-10-26 | 2020-04-30 | Uatc, Llc | Systems and Methods for Generating Artificial Scenarios for an Autonomous Vehicle |
Family Cites Families (20)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CA3067160A1 (en) * | 2015-02-10 | 2016-08-18 | Mobileye Vision Technologies Ltd. | Sparse map for autonomous vehicle navigation |
| US9761007B2 (en) * | 2015-06-16 | 2017-09-12 | Disney Enterprises, Inc. | Method and system for approximating fully calculated image sequence from a subset of calculated image data |
| DE102016000493B4 (en) | 2016-01-19 | 2017-10-19 | Audi Ag | Method for operating a vehicle system and motor vehicle |
| JP6477551B2 (en) | 2016-03-11 | 2019-03-06 | トヨタ自動車株式会社 | Information providing apparatus and information providing program |
| DE102016208076A1 (en) | 2016-05-11 | 2017-11-16 | Continental Teves Ag & Co. Ohg | METHOD AND DEVICE FOR EVALUATING AN INPUT VALUE IN A DRIVER ASSISTANCE SYSTEM, DRIVER ASSISTANCE SYSTEM AND TEST SYSTEM FOR A DRIVER ASSISTANCE SYSTEM |
| US11062207B2 (en) | 2016-11-04 | 2021-07-13 | Raytheon Technologies Corporation | Control systems using deep reinforcement learning |
| US20180174042A1 (en) | 2016-12-20 | 2018-06-21 | Intel Corporation | Supervised training and pattern matching techniques for neural networks |
| DE102017200180A1 (en) * | 2017-01-09 | 2018-07-12 | Bayerische Motoren Werke Aktiengesellschaft | Method and test unit for the motion prediction of road users in a passively operated vehicle function |
| CN106842925B (en) | 2017-01-20 | 2019-10-11 | 清华大学 | A method and system for intelligent locomotive manipulation based on deep reinforcement learning |
| CN107169567B (en) | 2017-03-30 | 2020-04-07 | 深圳先进技术研究院 | Method and device for generating decision network model for automatic vehicle driving |
| US20180336439A1 (en) | 2017-05-18 | 2018-11-22 | Intel Corporation | Novelty detection using discriminator of generative adversarial network |
| US20180336437A1 (en) * | 2017-05-19 | 2018-11-22 | Nec Laboratories America, Inc. | Streaming graph display system with anomaly detection |
| DE102018200816B3 (en) | 2018-01-18 | 2019-02-07 | Audi Ag | Method and analysis device for determining user data that describes a user behavior in a motor vehicle |
| DE102018112929A1 (en) | 2018-05-30 | 2018-07-26 | FEV Europe GmbH | Method for validating a driver assistance system using further generated test input data sets |
| US10482003B1 (en) * | 2018-11-09 | 2019-11-19 | Aimotive Kft. | Method and system for modifying a control unit of an autonomous car |
| US11940790B2 (en) * | 2018-12-12 | 2024-03-26 | Allstate Insurance Company | Safe hand-off between human driver and autonomous driving system |
| US10788396B2 (en) * | 2018-12-19 | 2020-09-29 | Fca Us Llc | Using an artificial neural network for combustion phasing control in a spark ignited internal combustion engine |
| CN109634120B (en) | 2018-12-26 | 2022-06-03 | 东软集团(北京)有限公司 | Vehicle control method and device |
| US11656620B2 (en) * | 2018-12-31 | 2023-05-23 | Luminar, Llc | Generating environmental parameters based on sensor data using machine learning |
| US20210197720A1 (en) * | 2019-12-27 | 2021-07-01 | Lyft, Inc. | Systems and methods for incident detection using inference models |
- 2020-08-21 US US16/999,533 patent/US20210056432A1/en not_active Abandoned
- 2021-12-22 US US17/559,828 patent/US12254415B2/en active Active
- 2021-12-22 US US17/559,716 patent/US20220138575A1/en active Pending
Non-Patent Citations (5)
| Title |
|---|
| Ding et al, 2/24/2019, "A New Multi-vehicle Trajectory Generator to Simulate Vehicle-to-Vehicle Encounters" (Year: 2019) * |
| Heusel et al, 2017, "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium" (Year: 2017) * |
| Honig & Werman, 2018, "Image declipping with deep networks" (Year: 2018) * |
| Huang et al, 2018, "Source localization using deep neural networks in a shallow water environment" (Year: 2018) * |
| Steinberg et al, 1991, "A neural network approach to source localization" (Year: 1991) * |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2598802A (en) * | 2020-08-24 | 2022-03-16 | Motional Ad Llc | Driving scenario sampling for training/tuning machine learning models for vehicles |
| US11364927B2 (en) | 2020-08-24 | 2022-06-21 | Motional Ad Llc | Driving scenario sampling for training/tuning machine learning models for vehicles |
| GB2598802B (en) * | 2020-08-24 | 2023-08-02 | Motional Ad Llc | Driving scenario sampling for training/tuning machine learning models for vehicles |
| US11938957B2 (en) | 2020-08-24 | 2024-03-26 | Motional Ad Llc | Driving scenario sampling for training/tuning machine learning models for vehicles |
| CN113947187A (en) * | 2021-10-14 | 2022-01-18 | 广西大学 | An anti-collapse multi-time-scale generative adversarial network generator parameter correction method |
Also Published As
| Publication number | Publication date |
|---|---|
| US12254415B2 (en) | 2025-03-18 |
| US20220138575A1 (en) | 2022-05-05 |
| US20220138094A1 (en) | 2022-05-05 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20210056432A1 (en) | Method for training an artificial neural generator network, method for training an artificial neural discriminator network, and test unit | |
| CN111332283A (en) | Method and system for controlling a motor vehicle | |
| US20230394896A1 (en) | Method and a system for testing a driver assistance system for a vehicle | |
| King et al. | A taxonomy and survey on validation approaches for automated driving systems | |
| CN112382165B (en) | Driving strategy generation method, device, medium, equipment and simulation system | |
| Arcaini et al. | Targeting patterns of driving characteristics in testing autonomous driving systems | |
| US20230236088A1 (en) | Computer-aided method and device for predicting speeds for vehicles on the basis of probability | |
| US20230177241A1 (en) | Method for determining similar scenarios, training method, and training controller | |
| CN115735107A (en) | Computer-implemented method for providing a test procedure for a traffic scenario to be tested | |
| US10913455B2 (en) | Method for the improved detection of objects by a driver assistance system | |
| CN115372014A (en) | Computer-implemented method for testing and/or authenticating a driving function to be tested | |
| JP7095467B2 (en) | Training data evaluation device, training data evaluation method, and program | |
| KR20230088719A (en) | A method for providing machine learning control for vehicle control based on provided vehicle sensor data. | |
| CN114174935B (en) | Computer-implemented method and test unit for approximating a subset of test results | |
| Subiantoro et al. | Adaptive cruise control based on multistage predictive control approach | |
| Lee et al. | Wisemove: A framework for safe deep reinforcement learning for autonomous driving | |
| US20230195977A1 (en) | Method and system for classifying scenarios of a virtual test, and training method | |
| US20250381990A1 (en) | Method for predicting an influence of one road user on at least one other road user, and method for operating a vehicle | |
| KR20210023722A (en) | Method for testing a system to a request | |
| Yang et al. | Adaptive testing for connected and automated vehicles with sparse control variates in overtaking scenarios | |
| US12291221B2 (en) | Computer-implemented method for terminating a scenario-based test process of a driving assistance system | |
| Nenchev | One stack, diverse vehicles: checking safe portability of automated driving software | |
| Lin et al. | Car following modelling with constrained generative adversarial imitation learning | |
| Meng et al. | Efficient precision-driven scenario design: Tailoring collision type probabilities for richer autonomous testing | |
| Barhoumi et al. | Formal Safety Guarantees for Autonomous Vehicles using Barrier Certificates |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: DSPACE DIGITAL SIGNAL PROCESSING AND CONTROL ENGINEERING GMBH, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BANNENBERG, SEBASTIAN;LORENZ, FABIAN;RASCHE, RAINER;SIGNING DATES FROM 20200820 TO 20200825;REEL/FRAME:053686/0036 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| AS | Assignment |
Owner name: DSPACE GMBH, GERMANY Free format text: CHANGE OF NAME;ASSIGNOR:DSPACE DIGITAL SIGNAL PROCESSING AND CONTROL ENGINEERING GMBH;REEL/FRAME:062202/0014 Effective date: 20211103 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
| STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |