WO2018066352A1 - Image generation system, program and method, and simulation system, program and method - Google Patents
Image generation system, program and method, and simulation system, program and method
- Publication number
- WO2018066352A1 (PCT/JP2017/033729)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- unit
- image generation
- position information
- shading
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B29/00—Maps; Plans; Charts; Diagrams, e.g. route diagram
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B29/00—Maps; Plans; Charts; Diagrams, e.g. route diagram
- G09B29/10—Map spot or coordinate position indicators; Map reading aids
Definitions
- The present invention relates to a simulation system, simulation program, and simulation method that generate a virtual image of a near-infrared sensor or a LiDAR laser light sensor and use the virtual image to simulate a recognition function module for an image that changes with displacement of vehicle position information.
- To recognize and control the situation outside the vehicle with an in-vehicle sensor, it is necessary to detect multiple obstacles around the host vehicle and multiple types of moving bodies such as vehicles, bicycles, and pedestrians, together with their positions and speeds. Furthermore, while the vehicle travels it is necessary to determine the meaning of road paint such as lane markers and stop lines and the meaning of traffic signs. As an in-vehicle sensor that detects such external information around the host vehicle, image recognition using a camera image sensor has been considered effective.
- Stereo camera: two cameras are used in the same way as the human eye, and the distance is calculated by the principle of triangulation.
- Infrared depth sensor: an infrared pattern is projected, its reflection is photographed with an infrared camera, and the distance is calculated from the deviation (phase difference).
- Ultrasonic sensor: the distance is calculated from the time between transmitting an ultrasonic wave and receiving its reflected wave.
- Millimeter-wave radar: the same mechanism as the ultrasonic sensor; the distance is calculated from the time between transmitting a millimeter wave and receiving its reflected wave.
- LiDAR (Light Detection and Ranging): the same mechanism as the ultrasonic sensor and millimeter-wave radar, but using laser light; the distance is calculated from the time taken to receive the reflected wave (TOF: Time of Flight).
- Infrared depth sensors and ultrasonic sensors are attractive because they are inexpensive, but their signals attenuate greatly with distance, so when the distance to an object is several tens of meters or more, accurate measurement is difficult or impossible. Millimeter-wave radar and LiDAR, by contrast, attenuate little even over long distances and therefore allow high-precision measurement at long range. Their drawback is that the devices are expensive and difficult to downsize, but their adoption in vehicles is expected to accelerate as research and development progress.
- The present invention solves the above problems and relates to improving the recognition rate for objects such as other vehicles around the host vehicle, obstacles on the road, and pedestrians.
- One object is to improve the realism of vehicle driving tests and sample collection by artificially generating images that closely resemble live-action images under conditions that are difficult to reproduce.
- Another object of the present invention is to construct a plurality of different types of sensors in a virtual environment, generate each video using CG technology, and provide a simulation system, simulation program, and simulation method for synchronous control using the generated videos.
- To that end, the present invention provides a system, a program, and a method for generating, as computer graphics, a virtual image to be input to a sensor means, comprising:
- a scenario creation unit for creating a scenario relating to the arrangement and behavior of an object present in the virtual image;
- a 3D modeling unit that performs modeling for each object based on the scenario;
- a 3D shading unit that performs shading for each model generated by the modeling unit and generates a shading image for each model;
- a component extraction unit that extracts a predetermined component included in the shading image and outputs it as a component image; and
- a depth image generation unit that generates a depth image in which depth is defined based on information on the three-dimensional shape of each object in the component image.
- The component is preferably the R component of the RGB image. In the above invention, it is also preferable to further provide a grayscale conversion unit that converts the component into grayscale.
- The present invention also provides a system, program, and method for generating, as computer graphics, a virtual image to be input to a sensor means, comprising:
- a scenario creation unit for creating a scenario relating to the arrangement and behavior of an object present in the virtual image;
- a 3D modeling unit that performs modeling for each object based on the scenario;
- a 3D shading unit that performs shading for each model generated by the modeling unit and generates a shading image for each model;
- a depth image generation unit that generates a depth image in which a depth is defined based on information on a three-dimensional shape of each object;
- wherein the shading unit has a function of performing shading only on a predetermined portion of the model from which light rays emitted from the sensor means are reflected, and a function of outputting only the three-dimensional shape of that predetermined portion, and the depth image generation unit generates a depth image for each object based on information on the three-dimensional shape of the predetermined portion.
- The scenario creation means preferably includes means for determining the three-dimensional shape information of each object, the motion information of the object, the material information of the object, the parameter information of the light source, the camera position information, and the sensor position information.
- It is also preferable to further include deep-learning recognition learning means that acquires a component image and a depth image based on a real image as teacher data and trains a neural network by backpropagation using the component image and the depth image generated by the depth image generation unit together with the teacher data.
- It is likewise preferable to provide deep-learning recognition learning means that acquires an irradiation image and a depth image based on a live-action image as teacher data and trains a neural network by backpropagation using the image obtained as a result of shading by the shading unit and the depth image generated by the depth image generation unit together with the teacher data.
- It is further preferable to include a TOF calculation unit that calculates, as TOF information, the time required from the irradiation of a light ray to the reception of its reflected wave; a distance image generation unit that generates a distance image based on the TOF information from the TOF calculation unit; and a comparison evaluation unit that compares the degree of coincidence between the distance image generated by the distance image generation unit and the depth image generated by the depth image generation unit.
- The modeling unit has a function of acquiring the comparison result from the comparison evaluation unit as feedback information, adjusting the modeling conditions based on the acquired feedback information, and performing modeling again.
- Modeling and acquisition of feedback information based on the comparison are repeated until the matching error in the comparison result from the comparison evaluation unit becomes smaller than a predetermined threshold value.
- The present invention also provides a simulation system, program, and method for a recognition function module for an image that changes with displacement of vehicle position information, comprising: position information acquisition means for acquiring position information of the vehicle relative to surrounding objects based on a detection result from the sensor means; image generation means for generating, based on the position information acquired by the position information acquisition means, a simulation image that reproduces the area specified by the position information; image recognition means for recognizing and detecting a specific object from the simulation image generated by the image generation means using the recognition function module; position information calculation means for generating a control signal for controlling the operation of the vehicle using the recognition result from the image recognition means and changing or correcting the position information of the own vehicle based on the generated control signal; and synchronization control means for synchronously controlling the position information acquisition means, the image generation means, the image recognition means, and the position information calculation means.
- The synchronization control means preferably further comprises: means for packetizing the position information in a specific format and sending it out; means for transmitting the packetized data via a network or via a transmission bus within a specific device; means for receiving and de-packetizing the packet data; and means for inputting the de-packetized data and generating an image.
- The synchronization control means transmits and receives the signals exchanged between the above means using UDP (User Datagram Protocol).
- The vehicle position information preferably includes any of the XYZ coordinates of the vehicle in road-surface absolute position coordinates, the XYZ coordinates of the tires in road-surface absolute position coordinates, the Euler angles of the vehicle, and the wheel rotation angles.
- the image generation means preferably includes means for synthesizing the three-dimensional shape of the vehicle by computer graphics.
- It is also preferable that a plurality of vehicles are set and the recognition function module is operated for each vehicle;
- the position information calculation means uses the recognition results from the recognition means to change or correct the position information of each of the plurality of vehicles; and
- the synchronization control means executes synchronization control for the plurality of vehicles with respect to the position information acquisition means, the image generation means, the image recognition means, and the position information calculation means.
- the image generating means preferably includes means for generating a different image for each sensor means.
- It is preferable that the simulation system includes means for generating images corresponding to a plurality of sensors, recognition means corresponding to each generated image, and means for performing the synchronization control using the plurality of recognition results.
- It is preferable that the image generation system, image generation program, and image generation method of the invention described above are provided as the image generation means, and that the depth image generated by the depth image generation unit of the image generation system is input to the image recognition unit as the simulation image.
- According to the present invention, learning samples can be generated by artificially creating CG images that closely resemble real images; the number of samples can therefore be increased, and the recognition rate can be improved by making learning more efficient.
- The present invention uses means for generating and synthesizing highly realistic CG images that closely resemble live-action images, based on a simulation model driven by the displacement of the vehicle position information, so it can artificially generate an effectively unlimited number of images to which non-existent environments or light sources are added.
- The generated CG image can be input to the recognition function module in the same way as a conventional camera image and processed in the same manner, so that whether the target object can be recognized and extracted can be tested.
- Application fields of the present invention include experimental apparatus and simulators for automatic driving of automobiles, as well as software modules and hardware devices (for example, a camera mounted on a vehicle, an image sensor, sensors that capture the three-dimensional shape around the vehicle, and machine learning software such as deep learning).
- Since the present invention provides CG technology that renders realistic images in real time together with synchronization control technology, it can be applied widely in fields other than automatic driving of automobiles; for example, surgical simulators, military simulators, and safe-operation testing of robots and drones are promising fields of use.
- FIG. 1 is a block diagram illustrating the overall configuration of an image generation system that generates a virtual image according to a first embodiment.
- FIG. 1 is a block diagram for generating a near-infrared virtual image.
- The near-infrared virtual image generation system is implemented, for example, by executing software installed in a computer, whereby various virtual modules are constructed on an arithmetic processing device such as the computer's CPU.
- Here, a "module" refers to a functional unit that is configured by hardware such as an apparatus or device, by software having the corresponding function, or by a combination thereof, and that achieves a predetermined operation.
- Specifically, the near-infrared virtual image generation system includes a scenario creation unit 10, a 3D modeling unit 11, a 3D shading unit 12, an R image grayscale conversion unit 13, and a depth image generation unit 14.
- the scenario creation unit 10 is a means for creating scenario data indicating what CG is to be created.
- the scenario creation unit 10 includes means for determining three-dimensional shape information of an object, operation information of the object, material information of the object, parameter information of the light source, camera position information, and sensor position information.
- In CG used for automatic driving, many objects such as roads, buildings, vehicles, pedestrians, bicycles, roadside strips, and traffic lights exist in the virtual space. The scenario is data that defines at which position (coordinates and altitude) each object is located, in which direction and how it moves, the position of the virtual camera (viewpoint) in the virtual space, the type, number, position, and orientation of the light sources, and the movement and behavior of the objects in the virtual space.
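- As a purely illustrative aid, the following Python sketch shows one way such scenario data could be organized; all class names and fields are hypothetical and are not defined in the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SceneObject:
    """One object placed in the virtual space (vehicle, pedestrian, building, ...)."""
    name: str
    position: Tuple[float, float, float]   # world coordinates (x, y, z)
    heading_deg: float                     # direction the object faces
    velocity_mps: float = 0.0              # how fast it moves along its heading

@dataclass
class LightSource:
    kind: str                              # e.g. "sun", "street_lamp"
    position: Tuple[float, float, float]
    intensity: float

@dataclass
class Scenario:
    """Scenario data: object layout and behaviour, camera/sensor pose, light sources."""
    objects: List[SceneObject] = field(default_factory=list)
    lights: List[LightSource] = field(default_factory=list)
    camera_position: Tuple[float, float, float] = (0.0, 1.5, 0.0)
    sensor_position: Tuple[float, float, float] = (0.0, 1.8, 0.0)

# Example: a simple scene with one oncoming car and one pedestrian
scenario = Scenario(
    objects=[
        SceneObject("car_1", (3.5, 0.0, 40.0), heading_deg=180.0, velocity_mps=12.0),
        SceneObject("pedestrian_1", (-2.0, 0.0, 25.0), heading_deg=90.0, velocity_mps=1.2),
    ],
    lights=[LightSource("sun", (0.0, 1000.0, 0.0), intensity=1.0)],
)
```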
- the scenario creation unit 10 first determines what CG image is to be generated. According to the scenario set by the scenario creation unit 10, the 3D modeling unit 11 generates a 3D image.
- The 3D modeling unit 11 is a module that creates the shapes of objects in the virtual space; it sets the coordinates of the vertices that make up the outer shape and internal structure of each object, together with the parameters of the equations that express the boundary lines and surfaces of the shape, and thereby constructs a three-dimensional object shape.
- the 3D modeling unit 11 models information such as a 3D shape of a road, a 3D shape of a vehicle traveling on the road, and a 3D shape of a pedestrian.
- the 3D shading unit 12 is a module that generates an actual 3DCG using each 3D model data D101 generated by the 3D modeling unit 11.
- The 3D shading unit 12 expresses the shading of objects realized in 3DCG by shading processing, generating a stereoscopic and realistic image according to the position of the light source and the intensity of the light.
- The R image grayscale conversion unit 13 is a module that functions both as a component extraction unit, which extracts a predetermined component from the shading image sent from the 3D shading unit 12, and as a grayscale conversion unit, which converts the extracted component image into grayscale. Specifically, the R image grayscale conversion unit 13 extracts the R component from the shading image D103, which is the RGB image sent from the 3D shading unit 12, as a component image, converts the extracted R-component image into grayscale as shown in FIG. 4, and outputs a grayscale image D104 (Img(x, y), where x is the horizontal coordinate and y is the vertical coordinate).
- FIG. 4 is a black-and-white image obtained by converting a real image obtained by photographing a room with a near-infrared sensor into a gray scale.
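- For illustration, here is a minimal NumPy sketch of extracting the R component of an RGB shading image and outputting it as a grayscale image Img(x, y); the array layout (H x W x 3 with the R channel first) and the 8-bit output range are assumptions, not details given in the patent.

```python
import numpy as np

def r_component_to_grayscale(shading_image: np.ndarray) -> np.ndarray:
    """Extract the R channel of an H x W x 3 RGB shading image (D103)
    and return it as a single-channel grayscale image (D104)."""
    if shading_image.ndim != 3 or shading_image.shape[2] < 3:
        raise ValueError("expected an H x W x 3 RGB image")
    r_channel = shading_image[:, :, 0].astype(np.float32)  # assumes channel 0 is R
    # Normalise to 0..255 so the output can be stored as an 8-bit grayscale image.
    r_max = r_channel.max()
    if r_max > 0:
        r_channel = r_channel / r_max * 255.0
    return r_channel.astype(np.uint8)

# Usage: gray = r_component_to_grayscale(shading_rgb)  # gray[y, x] corresponds to Img(x, y)
```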
- The depth image generation unit 14 is a module that acquires the 3D shape data of each object on the screen based on the modeling information D102 of each 3D shape model input from the 3D shading unit 12 and generates a depth image D105 (also called a depth map) based on the distance to each object.
- FIG. 5 shows the above depth image color-coded according to distance: the red component is stronger the closer an object is, the blue component is stronger the farther away it is, and objects at intermediate distances shift from yellow to green, so depth information is obtained for every object in the screen.
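- A minimal sketch of one possible colour coding along these lines (red for near, blue for far, yellow-green in between); the exact colour map used in FIG. 5 is not specified, so this mapping is only an assumption.

```python
import numpy as np

def colorize_depth(depth: np.ndarray) -> np.ndarray:
    """Colour-code a depth map: near pixels red, mid-range yellow/green, far pixels blue."""
    d = depth.astype(np.float32)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-6)   # normalise to 0..1 (0 = near)
    rgb = np.zeros(d.shape + (3,), dtype=np.float32)
    rgb[..., 0] = 1.0 - d                        # red strongest for near objects
    rgb[..., 1] = 1.0 - np.abs(2.0 * d - 1.0)    # green peaks in the middle range
    rgb[..., 2] = d                              # blue strongest for far objects
    return (rgb * 255).astype(np.uint8)
```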
- the near-infrared virtual image generation method of the present invention can be implemented by operating the near-infrared virtual image generation system having the above configuration.
- the scenario creation unit 10 creates a scenario of what CG is to be created.
- a scenario such as the position of the camera, the type and number of light sources is created.
- the scenario creation unit 10 determines what CG image to generate. Next, information such as the 3D shape of the road, the 3D shape of the vehicle traveling on the road, and the 3D shape of the pedestrian is modeled along the scenario set by the scenario creating unit 10.
- As for the modeling means, a road, for example, can easily be realized by using a "high-precision map database": data are collected by a large number of vehicles equipped with an in-vehicle device 1b as shown in the figure, the map is converted into 3D from the collected data, and, as shown in (c), each road feature is linked into a database using vectorized drawings.
- the 3D modeling unit 11 acquires or generates a required 3D shape model of each target object based on the scenario information D100 created by the scenario creation unit 10. Then, the 3D shading unit 12 generates an actual 3DCG using each 3D model data D101 generated by the 3D modeling unit 11.
- The shading image D103 sent from the 3D shading unit 12 is then converted, with respect to its R component, into an R-image grayscale as shown in FIG. 4, and a grayscale image D104 (Img(x, y), where x is the horizontal coordinate and y is the vertical coordinate) is output.
- In parallel, the modeling information D102 of each 3D shape model is obtained from the 3D shading unit 12, the 3D shape data of each object on the screen is obtained from this information by the depth image generation unit 14, and a depth image D105 (a function of x, the horizontal coordinate, and y, the vertical coordinate) is generated.
- the grayscale image D104 and the depth image D105 obtained by the above operation are output as output images of the present embodiment, and these two image outputs are used for image recognition.
- The system according to the present embodiment is shown in FIG. 6 and includes a scenario creation unit 10, a 3D modeling unit 11, a shading unit 15, and a depth image generation unit 16.
- The shading unit 15 is a module that generates the actual 3DCG using each set of 3D model data D101 generated by the 3D modeling unit 11; it expresses the shading of objects realized in the 3DCG by shading processing, and a three-dimensional and realistic image is generated according to the position of the light source and the intensity of the light.
- The shading unit 15 in the present embodiment has a laser light irradiation part extraction unit 15a, which extracts the 3D shape of only the part irradiated with the laser light and performs shading on it, after which the shading image D106 is output. Further, since the reflected light of the laser beam has no color components such as RGB in the first place, the shading unit 15 outputs a shading image D106 that is directly converted into grayscale.
- The depth image generation unit 16 is a module that acquires the 3D shape data of each object on the screen based on the modeling information D102 of each 3D shape model input from the 3D shading unit 12 and generates a depth image (also called a depth map) based on the distance to each object.
- the depth image generation unit 16 in the present embodiment outputs a depth image D108 in which only a portion related to laser light irradiation is extracted by the laser light irradiation partial extraction unit 16a.
- the LiDAR is a sensor that measures the distance to an object at a long distance by measuring scattered light in response to laser irradiation issued in a pulse form. In particular, it is attracting attention as one of the essential sensors for improving the accuracy of automated driving.
- basic features of LiDAR will be described below.
- the laser light used for LiDAR is near-infrared light (for example, a wavelength of 905 nm) with a micro pulse.
- the scanner and optical meter are composed of a motor, a mirror, a lens, and the like.
- the light receiver and the signal processing unit receive the reflected light and calculate the distance by signal processing.
- As a ranging means adopted in LiDAR, there is the LiDAR scanning device 114 based on the TOF (Time of Flight) method. As shown in the figure, under the control of a laser driver 114a, this LiDAR scanning device 114 outputs laser light from the light-emitting element 114b through the irradiation lens 114c as the irradiation pulse Pl1.
- the irradiation pulse Pl1 is reflected by the measurement object Ob1, is incident on the light receiving lens 114d as a reflection pulse Pl2, and is detected by the light receiving element 114e.
- the detection result by the light receiving element 114e is output from the LiDAR scanning device 114 as an electric signal by the signal light receiving circuit 114f.
- an ultrashort pulse having a rise time of several ns and an optical peak power of several tens of watts is irradiated toward the measurement object, and the ultrashort pulse is reflected by the measurement object and returned to the light receiving element.
- the basic operation of the LiDAR system is as follows.
- The laser beam emitted from the LiDAR scanning device 114 is deflected by the rotating mirror 114g, swung left and right or rotated through 360° to scan the surroundings, and the beam reflected back from objects is captured again by the light receiving element 114e of the LiDAR scanning device 114.
- the captured reflected light is finally obtained as point cloud data PelY and PelX in which the signal intensity corresponding to the rotation angle is shown.
- the central portion is rotated and irradiated with laser light, and scanning of 360 degrees is possible.
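- To illustrate how such rotation-angle scans become point cloud data, the sketch below converts per-beam (azimuth, elevation, range) returns into Cartesian XYZ points; the angle conventions and the single-layer 360-degree sweep are assumptions made for the example.

```python
import numpy as np

def polar_scan_to_point_cloud(azimuth_deg: np.ndarray,
                              elevation_deg: np.ndarray,
                              range_m: np.ndarray) -> np.ndarray:
    """Convert LiDAR returns given as (azimuth, elevation, range) into N x 3 XYZ points."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    x = range_m * np.cos(el) * np.cos(az)
    y = range_m * np.cos(el) * np.sin(az)
    z = range_m * np.sin(el)
    return np.stack([x, y, z], axis=-1)

# Example: a single 360-degree sweep of a one-layer scanner at 0.5-degree resolution
azimuth = np.arange(0.0, 360.0, 0.5)
elevation = np.zeros_like(azimuth)
ranges = np.full_like(azimuth, 20.0)   # placeholder: 20 m returns everywhere
points = polar_scan_to_point_cloud(azimuth, elevation, ranges)
```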
- Since the laser beam of the LiDAR sensor is highly directional, it illuminates only part of the scene even when it reaches far distances. Therefore, in the shading unit 15 shown in FIG. 6, the laser light irradiation part extraction unit 15a extracts and shades the 3D shape of only the portion irradiated with the laser beam, and the shading image D106 is output.
- the laser light irradiation portion extraction unit 16a outputs a depth image D108 in which only the portion related to laser light irradiation is extracted.
- FIG. 10 shows an example in which the laser irradiation part is extracted: a laser beam is emitted in all 360-degree directions from a LiDAR unit attached to the top of the vehicle traveling in the center of the image.
- A car is detected by the light reflected from the beam on the upper left side of the screen, and a pedestrian is detected by the light reflected from the beam on the upper right side of the screen.
- The shading unit 15 may generate the image as a result of shading the 3D shape of the automobile shown in FIG. 10 using 3DCG technology, generating an RGB image internally and then outputting only the R component. However, since the reflected light of the laser beam does not originally have color components such as RGB, the directly grayscaled shading image D106 is output.
- The depth image generation unit 16 generates the depth image D108 only for the reflection portion of the laser light, whereas the depth image described in the first embodiment covers the entire screen.
- the gray scaled shading image D106 and depth image D108 obtained by the above operation are transmitted as output images of the present embodiment. These two image outputs can be used for image recognition and learning of the recognition function.
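- As a rough sketch of restricting the shading and depth outputs to the laser-irradiated portion, one option is to apply a boolean irradiation mask to full-frame images; how the mask itself is derived from the scan pattern is left out here and is not the extraction method defined in the patent.

```python
import numpy as np

def mask_to_irradiated_part(gray_shading: np.ndarray,
                            depth: np.ndarray,
                            irradiation_mask: np.ndarray):
    """Keep shading (D106) and depth (D108) only where the laser beam actually hits.

    irradiation_mask is a boolean H x W array that is True for pixels whose
    direction falls inside the scanned laser pattern and whose surface returns light.
    """
    shading_out = np.where(irradiation_mask, gray_shading, 0)
    depth_out = np.where(irradiation_mask, depth, 0.0)
    return shading_out, depth_out
```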
- The virtual image system using the near-infrared sensor described in the first embodiment and the virtual image system using the LiDAR sensor described in the second embodiment can be applied to AI recognition technology widely used for automated driving, such as a deep learning recognition system, so that virtual images of environments that cannot actually be photographed can be supplied for various sensors.
- FIG. 11 is a configuration diagram of a deep learning recognition system using a back-propagation type neural network that is currently considered to have the highest performance.
- the deep learning recognition system according to the present embodiment is roughly configured by a neural network calculation unit 17 and a back propagation unit 18.
- The neural network calculation unit 17 includes a neural network composed of multiple layers as shown in FIG. 12. The grayscale image D104 and the depth image D105, which are the outputs shown in FIG. 1, are input to this neural network; non-linear calculations are then performed based on coefficients (608, 610) set in advance in the neural network, and a final output 611 is obtained.
- The backpropagation unit 18 receives the calculation value D110, which is the calculation result of the neural network calculation unit 17, and computes its error against teacher data to be compared (for example, an irradiation image or a depth image based on a real image). In the system illustrated in FIG. 11, a grayscale image D111 is input as teacher data for the grayscale image D104, and a depth image D112 is input as teacher data for the depth image D105.
- the back-propagation unit 18 performs an operation by the back-propagation method.
- The backpropagation method calculates how much error there is between the output of the neural network and the teacher data, and propagates that result backward from the output toward the input so that the calculation can be performed again.
- the neural network calculation unit 17 that has received the error value D109 fed back performs a predetermined calculation again and inputs the result to the backpropagation unit 18. The above operations in the loop are executed until the error value becomes smaller than a preset threshold value, and the neural network calculation is terminated when it is determined that the error has sufficiently converged.
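- A minimal training-loop sketch of this convergence criterion, using PyTorch as an assumed framework (the patent does not name one): the network output is compared with the teacher data, the error is back-propagated, the coefficients are updated, and the loop ends once the error falls below a preset threshold.

```python
import torch
import torch.nn as nn

def train_until_converged(model: nn.Module,
                          inputs: torch.Tensor,     # e.g. grayscale + depth channels stacked
                          teacher: torch.Tensor,    # teacher data from real sensor images
                          threshold: float = 1e-3,
                          max_steps: int = 10_000) -> float:
    """Repeat forward pass + backpropagation until the error is below the threshold."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = nn.MSELoss()
    error = float("inf")
    for _ in range(max_steps):
        optimizer.zero_grad()
        output = model(inputs)              # neural network calculation (D110)
        loss = criterion(output, teacher)   # error against the teacher data
        loss.backward()                     # back-propagate the error (D109)
        optimizer.step()                    # update the coefficients of each layer
        error = loss.item()
        if error < threshold:               # error has sufficiently converged
            break
    return error
```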
- In this way the coefficient values (608, 610) of the neural network in the neural network calculation unit 17 are determined, and deep learning recognition of actual images can be performed using this trained neural network.
- The above exemplifies deep learning recognition for the output images of the near-infrared sensor described in the first embodiment, but deep learning recognition for the output images of the LiDAR sensor of the second embodiment can be handled in exactly the same way; in that case the input images at the left end of FIG. 11 are the shading image D106 and the depth image D108 of FIG. 6.
- a fourth embodiment of the present invention will be described.
- In the second embodiment, a depth image D108 is output from the depth image generation unit 16; how accurately this depth image reproduces a distance image actually obtained with laser light is a very important evaluation point for this simulation system.
- an example in which the present invention is applied to an evaluation system for evaluating this depth image will be described.
- As shown in FIG. 13, the depth image evaluation system according to the present embodiment is configured as an evaluation unit for the depth image D108 output from the depth image generation unit 16 described above, and includes a TOF calculation unit 19, a distance image generation unit 20, and a comparison evaluation unit 21.
- The TOF calculation unit 19 is a module that calculates TOF information, including the TOF value, for the depth image D108 generated by the depth image generation unit 16. The TOF value corresponds to the delay time, i.e. the time difference between the projection pulse sent from the light source being reflected by the subject and the reflected pulse being received by the sensor as a received light pulse. This delay time is output from the TOF calculation unit 19 as the TOF value D113.
- The distance image generation unit 20 acquires the TOF of each point in the laser irradiation portion based on the TOF value D113 calculated by the TOF calculation unit 19, computes the distance L to each point from the delay time at that point, and generates a distance image D114 in which the distance L to each point is represented as an image.
- The comparison evaluation unit 21 performs a comparison calculation between the distance image D114 generated by the distance image generation unit 20 and the depth image D108 input from the depth image generation unit 16, and evaluates them based on a comparison result that includes their degree of coincidence. As the comparison method, a commonly used measure such as the absolute squared error can be used; the larger the value of this comparison result, the larger the difference between the two. This makes it possible to evaluate how close the depth image based on 3DCG modeling is to the distance image that would actually be generated from the TOF of the laser light.
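- A sketch of the two steps just described, under the usual TOF relation L = c·t/2 and with a mean absolute squared error as the comparison measure; the variable names and the valid-pixel mask are illustrative assumptions.

```python
import numpy as np

SPEED_OF_LIGHT = 2.998e8  # m/s

def tof_to_distance(delay_s: np.ndarray) -> np.ndarray:
    """Distance image D114: the pulse travels to the object and back, so L = c * t / 2."""
    return SPEED_OF_LIGHT * delay_s / 2.0

def compare_depth_images(distance_img: np.ndarray,
                         depth_img: np.ndarray,
                         valid_mask: np.ndarray) -> float:
    """Mean absolute squared error between the TOF distance image (D114)
    and the CG depth image (D108), evaluated only on laser-irradiated pixels."""
    diff = distance_img[valid_mask] - depth_img[valid_mask]
    return float(np.mean(diff ** 2))

# A larger value means a larger discrepancy; a threshold on this value can be
# fed back to the modelling stage until the two images agree closely enough.
```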
- the operation of the depth image evaluation system having the above-described configuration will be described.
- First, the TOF time is calculated. This TOF is the time t described in the figure. More specifically, as shown in FIG. 14A, the laser light is emitted from the light source in the form of a pulse as a light projection pulse, is reflected by the subject, and is received by the sensor as a received light pulse; the time difference between them is measured. As shown in the figure, this time difference corresponds to the delay time between the light projection pulse and the light reception pulse.
- The TOF value D113 calculated by the TOF calculation unit 19 is then output.
- the distance image generation unit 20 generates the distance image D114 of each point of the irradiation unit image by the above calculation. Thereafter, a comparison calculation is performed between the depth image D108 and the distance image D114.
- As the comparison means, a commonly used absolute squared error may be used; the larger the value, the greater the difference between the two. It is thus possible to evaluate how close the depth image based on 3DCG modeling comes to the distance image generated assuming the actual TOF of the laser light (taken as the reference).
- The comparison result D115 may be a numerical value such as the absolute squared error, or it may be a signal, obtained after threshold processing, indicating that the two images do not match. In the latter case the result may, for example, be fed back to the 3D modeling unit 11. By repeating this processing until a predetermined level of agreement is reached, a highly accurate 3D-CG-based depth image can be generated.
- An autonomous driving system is a system, such as ADAS (advanced driver-assistance systems), that detects and avoids the possibility of accidents in advance. To realize automatic driving of a car, it recognizes the camera video mounted on the vehicle using image recognition technology, detects other vehicles, pedestrians, traffic lights, and other objects, and automatically performs control such as deceleration and avoidance.
- FIG. 15 is a conceptual diagram showing the overall configuration of the simulator system according to the present embodiment.
- the simulator system according to the present embodiment executes a simulation program for one or a plurality of simulation targets, and executes tests and machine learning of these simulator programs.
- A simulator server 2 is arranged on a communication network 3, and an information processing terminal 1a and an in-vehicle device 1b, which generate or acquire the position of the own vehicle, are connected to the simulator server 2 through the communication network 3.
- The communication network 3 is an IP network using the TCP/IP communication protocol; it is a distributed communication network built by connecting various communication lines (telephone lines, ISDN lines, ADSL lines, optical lines, dedicated lines, 3rd-generation (3G) systems such as WCDMA (registered trademark) and CDMA2000, 4th-generation (4G) systems such as LTE, 5th-generation (5G) and later systems, and wireless networks such as WiFi (registered trademark) and Bluetooth (registered trademark)).
- This IP network includes a LAN such as an intranet (in-company network) or a home network based on 10BASE-T or 100BASE-TX.
- Alternatively, simulator software may be installed on the PC 1a, in which case the simulation can be performed on a single PC.
- The simulation execution unit 205 includes an image generation unit 203, which generates a simulation image reproducing the area specified by the position information generated or acquired by the position information acquisition unit on the client device 1 side and transmitted to the simulator server 2 side, and an image recognition unit 204, which recognizes and detects a specific object from the generated simulation image using a recognition function module.
- As the recognition function module 204a of the image recognition unit, the neural network calculation unit 17 of the virtual-image deep learning recognition system described in the third embodiment can be applied, and as the learning unit 204b, the backpropagation unit 18 described above can be applied.
- the vehicle position information calculation unit 51 first sends the vehicle position information D02 of the own vehicle to the UDP synchronization control unit 202 according to the timing of the control signal D03 from the UDP synchronization control unit 202.
- As initial data for the vehicle position information calculation unit 51, for example, map data, the position of the own vehicle on the map, and information such as the rotation angle and diameter of the vehicle's wheels can be loaded from a predetermined storage device 101.
- the UDP synchronization control unit 202 and the UDP information transmission / reception unit 206 transmit and receive data between them in cooperation with the client side execution unit 102a on the client device 1 side.
- The packetized data is transmitted via a network or via a transmission bus within a specific device; the simulator server 2 receives the packet data and de-packetizes it (S103), and the de-packetized data D05 is input to the image generation unit 203 of the simulation execution unit 205 to generate a CG image.
- the UDP information transmission / reception unit 206 transmits / receives packet information D04 in which various data groups including vehicle information are packetized between the devices by the UDP synchronization control unit 202 using UDP (User Datagram Protocol).
- As the vehicle information, various data such as the XYZ coordinates of the vehicle position, the XYZ coordinates of the tire positions, and the Euler angles are mainly used, and the data D05 necessary for generating the CG image is sent out.
- the packet information D04 in which various data groups are UDP packets is divided into a packet header and a data body payload by the de-packetizing process in the UDP information transmission / reception unit 206.
- the exchange of UDP packet data may be performed using a network between distant locations, or may be performed between transmission buses within a single device such as a simulator.
- Data D05 corresponding to the payload is input to the image generation unit 203 of the simulation execution unit 205 (S104).
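- To make the packetize/de-packetize flow concrete, here is a minimal Python sketch of sending vehicle position information over UDP and unpacking it on the receiving side; the seven-double payload layout is a hypothetical format, not the one defined in the patent.

```python
import socket
import struct

# Hypothetical payload: x, y, z of the vehicle, three Euler angles, wheel rotation angle
PAYLOAD_FORMAT = "<7d"   # seven little-endian doubles

def send_vehicle_position(sock: socket.socket, addr, x, y, z, roll, pitch, yaw, wheel_deg):
    payload = struct.pack(PAYLOAD_FORMAT, x, y, z, roll, pitch, yaw, wheel_deg)
    sock.sendto(payload, addr)           # packetize and send over UDP

def receive_vehicle_position(sock: socket.socket):
    data, _ = sock.recvfrom(1024)        # receive and de-packetize
    return struct.unpack(PAYLOAD_FORMAT, data)

# Sender side (e.g. the vehicle position information calculation unit):
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   send_vehicle_position(sock, ("127.0.0.1", 50000), 12.3, 0.0, 45.6, 0.0, 0.0, 1.57, 30.0)
# Receiver side (e.g. the simulator server feeding the image generation unit):
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("0.0.0.0", 50000))
#   x, y, z, roll, pitch, yaw, wheel = receive_vehicle_position(sock)
```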
- The image generation unit 203 acquires, as data D05, the position information acquired or calculated by the position information acquisition unit on the client device 1 side, and, based on that position information, generates a simulation image in which the area specified by the position information (the landscape determined by the latitude/longitude, direction, and field of view on the map) is reproduced by computer graphics (S105).
- the simulation image D13 generated by the image generation unit 203 is sent to the image recognition unit 204.
- As a predetermined image generation method, the image generation unit 203 uses, for example, a CG image generation technique based on the latest physically based rendering (PBR) method to generate a realistic image.
- the recognition result information D06 is input again to the vehicle position information calculation unit 51, and is used, for example, to calculate vehicle position information for determining the next operation of the host vehicle.
- the image generation unit 203 can generate not only vehicles but also surrounding images, for example, objects such as road surfaces, buildings, traffic lights, other vehicles, and pedestrians, using, for example, the CG technique using the PBR method.
- Game titles on consoles such as the PlayStation already render objects of this kind very realistically, which shows that the latest CG technology is sufficient to achieve this.
- an image of an object other than the own vehicle is already stored as initial data.
- In an automatic driving simulator, a large amount of sample data for highways and ordinary roads is stored in a database, and these data may be used as appropriate.
- The image recognition unit 204 uses the recognition function module 204a, which is the test target or machine learning target, to recognize and extract specific target objects from the simulation image generated by the image generation unit 203 (S106). If no object is recognized ("N" in step S107), the process proceeds to the next time frame (S109), and the above processing S101 to S107 is repeated ("Y" in step S109) until no time frames remain ("N" in step S109).
- If a recognized object exists in step S107 ("Y" in step S107), the recognition result of the image recognition unit 204 is sent as recognition result information D06 to the vehicle position information calculation unit 51 on the client device 1 side. The vehicle position information calculation unit 51 then acquires the recognition result information D06 from the image recognition unit 204 through the UDP information transmission/reception unit 206, generates a control signal for controlling the operation of the vehicle using the acquired recognition result, and changes or corrects the position information of the host vehicle based on the generated control signal (S108).
- the simulation image D13 which is a CG image generated here, is input to the image recognition unit 204 to perform object recognition and detection using a recognition technique such as deep learning as described above.
- the obtained recognition result is given by area information on the screen (for example, XY two-dimensional coordinates of the extracted rectangular area) such as other vehicles, pedestrians, signs, and traffic lights.
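- Taken together, steps S101 to S109 form a synchronized per-frame loop; the sketch below summarizes that control flow, with the units described above represented by caller-supplied functions whose names are purely illustrative.

```python
from typing import Any, Callable, List, Tuple

Box = Tuple[float, float, float, float]   # XY rectangle of a recognized object on the screen

def run_simulation(num_frames: int,
                   get_position: Callable[[], dict],
                   generate_image: Callable[[dict], Any],
                   recognize: Callable[[Any], List[Box]],
                   update_position: Callable[[List[Box]], None]) -> None:
    """One synchronized pass per time frame: position -> CG image -> recognition -> control."""
    for _frame in range(num_frames):
        position = get_position()              # S101-S104: vehicle position arrives via UDP
        sim_image = generate_image(position)   # S105: CG simulation image (D13)
        detections = recognize(sim_image)      # S106: recognition function module
        if detections:                         # S107: recognized objects exist?
            update_position(detections)        # S108: control signal, change/correct position
        # S109: proceed to the next time frame
```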
- When a simulator for automatic driving is executed, many objects such as other vehicles, pedestrians, buildings, and road surfaces appear in the image, as they would during actual vehicle driving.
- In automatic driving, the system performs actions such as automatically turning the steering wheel, stepping on the accelerator, or applying the brake while acquiring real-time information from various sensors such as the camera images, millimeter waves, and radar waves mounted on the vehicle.
- For example, when another vehicle approaches, the image recognition unit 204 detects the approach by image recognition technology and outputs the recognition result information D06 to the vehicle position information calculation unit 51. Based on this information, the vehicle position information calculation unit 51 changes the position information of the host vehicle, for example by steering to avoid the other vehicle or decelerating by braking. Likewise, when a pedestrian suddenly jumps out in front of the host vehicle, it performs operations such as steering away or braking suddenly to avoid the pedestrian, and changes the position information of the host vehicle accordingly.
- Data transmission from the vehicle position information calculation unit 51 to the simulation execution unit 205 via the UDP synchronization control unit 202 and the UDP information transmission/reception unit 206 can be performed over the UDP protocol at a cycle of, for example, 25 msec (25 msec being only an example).
- Because the vehicle position information of the next time frame is determined from the output of the simulation execution unit 205, the whole system must be synchronized; otherwise the behavior of an actual vehicle cannot be simulated. Although transmission is performed at a 25 msec cycle, zero delay is ideal but practically impossible, so UDP is used to reduce the delay associated with transmission and reception.
- FIG. 20 is a conceptual diagram showing the overall configuration of the system according to the present embodiment.
- FIG. 21 is a block diagram showing an internal configuration of the apparatus according to the present embodiment.
- The embodiments so far have mainly been limited to the case of a single own vehicle; this embodiment illustrates a case in which position information for a large number of vehicles is processed simultaneously in parallel.
- a plurality of client devices 1c to 1f are connected to the simulator server 2.
- The UDP synchronization control unit 202 and the UDP information transmission/reception unit 206 are shared components, while vehicle position information calculation units 51c to 51f are provided in the client devices 1c to 1f according to the number of vehicles to be simulated, and simulation execution units 205c to 205f are provided on the simulator server 2 side.
- The vehicle position information calculation units 51c to 51f send the vehicle position information D02c to D02f of their own vehicles to the UDP synchronization control unit 202 according to the timing of the control signals D03c to D03f.
- The UDP synchronization control unit 202 converts the vehicle position information D02c to D02f of each vehicle into packet information D04 containing the various data groups by UDP packetization, which facilitates transmission and reception using the UDP protocol.
- the packet information D04 is divided into a packet header and a data body payload by the de-packetizing process in the UDP information transmitting / receiving unit 206.
- the exchange of UDP packet data may be performed using a network between distant locations, or may be performed between transmission buses within a single device such as a simulator.
- Data D05c to f corresponding to the payload are input to the simulation execution units 205c to 205f.
- In FIG. 20, the PC terminals 1c to 1f and the vehicle synchronization simulator program 4 are connected remotely via the communication network 3, but the program can also be installed on a recording medium such as the PC's local HDD or SSD and operated stand-alone. In that case, verification can be performed with lower delay and is not affected by congestion of the network bandwidth.
- The terminals 1c to 1f need not be limited to PCs; for example, a car navigation system mounted on a test vehicle may be used.
- Instead of having the image recognition unit 204 recognize the simulation image D13, the CG image from the image generation unit 203 in FIG. 18, a live-action driving video may be input in place of D13 and used to evaluate the performance of the image recognition unit 204. Humans can instantly and accurately recognize pedestrians and vehicles in live-action driving video, so it can be verified whether the results recognized and extracted by the image recognition unit 204 are the same.
- The first deep learning recognition unit 61 handles, for example, an image sensor; since the 3D-graphics composite image for an image sensor is a two-dimensional planar image, its deep learning recognition means uses a recognition method for two-dimensional images.
- The next deep learning recognition unit 62 handles 3D point cloud data input from a LiDAR sensor; the 3D point cloud data is converted into a 3D graphics image by the image generation unit 203.
- The 3D point cloud graphics image D61 generated by the above means is input to the deep learning recognition unit 62, where recognition is performed by recognition means trained on 3D point cloud data. This requires means different from the deep learning recognition means trained on images for the image sensor, but the benefit is large: an oncoming vehicle that is very far away is likely to be missed by an image sensor, whereas LiDAR can capture the size and shape of an oncoming vehicle several hundred meters away. Conversely, because LiDAR relies on reflected light it is not effective for objects that do not reflect, a problem that does not occur with an image sensor.
- This synchronization processing may also be performed externally, on a network such as a cloud. The number of sensors per vehicle will increase rapidly in the future and the computational load of deep learning recognition is large, so it is effective to execute the parts that can be handled externally in a cloud with large-scale computing power and feed the results back.
- The material photographing device is assumed to include, in addition to the image sensor of the in-vehicle camera, a LiDAR sensor and a millimeter-wave sensor as described above.
- After an image is captured, a high-quality CG image is generated and sent out by the PBR technique described in the first embodiment using parameters such as light information extracted from the captured image.
- For the LiDAR sensor, three-dimensional point cloud data is created from the reflected light of the laser beam actually emitted from the in-vehicle LiDAR sensor, and an image obtained by converting the three-dimensional point cloud data into 3DCG is output from the image generation unit 203.
- CG images corresponding to a plurality of types of sensors are sent from the image generation unit 203, and recognition processing is performed by predetermined means in each deep learning recognition unit in FIG.
- the LiDAR sensor has been described as an example, but it is also effective to use the near infrared sensor described in the second embodiment.
- Reference signs: 15a ... laser light irradiation part extraction unit; 16 ... depth image generation unit; 16a ... laser light irradiation part extraction unit; 17 ... neural network calculation unit; 18 ... backpropagation unit; 19 ... TOF calculation unit; 20 ... distance image generation unit; 21 ... comparison evaluation unit; 51 (51c to 51f) ... vehicle position information calculation unit; 61 to 6n ... deep learning recognition unit; 84 ... learning result synchronization unit; 101 ... storage device; 102 ... CPU; 102a ... client-side execution unit; 103 ... memory; 104 ... input interface; 105 ... output interface; 106, 201 ... communication interface; 202 ... UDP synchronization control unit; 203 ... image generation unit; 204 ... image recognition unit; 204a ... recognition function module; 204b ... learning unit; 205 (205c to 205f) ... simulation execution unit; 206 ... UDP information transmission/reception unit; 210 ... map database; 210-213 ... various databases; 211 ... vehicle database; 212 ... drawing database; 402 ... CPU; 611 ... output
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mathematical Physics (AREA)
- Business, Economics & Management (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Traffic Control Systems (AREA)
Abstract
The problem addressed by the present invention is to generate, by computer graphics synthesis techniques, near-infrared sensor or LiDAR laser light sensor images that are extremely similar to actually captured images, and to use these computer-graphics-synthesized images so that simulations of a recognition function module can be executed on images that change with displacement of vehicle position information. The solution is a system that uses computer graphics techniques to generate a virtual sensor image, comprising: means for creating a scenario of the objects present in the image; means for performing modeling for each object in the computer graphics according to the scenario; means for performing shading for each model of the modeling result; means for outputting only one component of the shaded image; and means for generating a depth image based on three-dimensional shape information for each object in the computer graphics.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/367,258 US20190236380A1 (en) | 2016-10-06 | 2019-03-28 | Image generation system, program and method, and simulation system, program and method |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2016-197999 | 2016-10-06 | ||
| JP2016197999 | 2016-10-06 | ||
| JP2017-092950 | 2017-05-09 | ||
| JP2017092950A JP6548691B2 (ja) | 2016-10-06 | 2017-05-09 | 画像生成システム、プログラム及び方法並びにシミュレーションシステム、プログラム及び方法 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/367,258 Continuation US20190236380A1 (en) | 2016-10-06 | 2019-03-28 | Image generation system, program and method, and simulation system, program and method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018066352A1 true WO2018066352A1 (fr) | 2018-04-12 |
Family
ID=61831186
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2017/033729 Ceased WO2018066352A1 (fr) | 2016-10-06 | 2017-09-19 | Système, programme et procédé de génération d'image et système, programme et procédé de simulation |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2018066352A1 (fr) |
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109102537A (zh) * | 2018-06-25 | 2018-12-28 | 中德人工智能研究院有限公司 | 一种激光雷达和球幕相机结合的三维建模方法和系统 |
| CN109828267A (zh) * | 2019-02-25 | 2019-05-31 | 国电南瑞科技股份有限公司 | 基于实例分割和深度摄像头的变电站巡检机器人障碍物检测和测距方法 |
| CN111523409A (zh) * | 2020-04-09 | 2020-08-11 | 北京百度网讯科技有限公司 | 用于生成位置信息的方法和装置 |
| CN111612071A (zh) * | 2020-05-21 | 2020-09-01 | 北京华睿盛德科技有限公司 | 一种从曲面零件阴影图生成深度图的深度学习方法 |
| JPWO2020250620A1 (fr) * | 2019-06-14 | 2020-12-17 | ||
| CN113299104A (zh) * | 2021-04-20 | 2021-08-24 | 湖南海龙国际智能科技股份有限公司 | 一种增强现实的反向寻车系统及方法 |
| US11354461B2 (en) | 2018-09-07 | 2022-06-07 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and device for simulating a distribution of obstacles |
| CN118967948A (zh) * | 2024-10-15 | 2024-11-15 | 园测信息科技股份有限公司 | 三维场景程序化建模方法及装置 |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2004355272A (ja) * | 2003-05-28 | 2004-12-16 | Toyota Motor Corp | 緊急車両の接近通知システム |
| JP2007066045A (ja) * | 2005-08-31 | 2007-03-15 | Hitachi Ltd | シミュレーション装置 |
| JP2014229303A (ja) * | 2013-05-20 | 2014-12-08 | 三菱電機株式会社 | シーン内の物体を検出する方法 |
| JP2015141721A (ja) * | 2014-01-29 | 2015-08-03 | コンチネンタル オートモーティブ システムズ インコーポレイテッドContinental Automotive Systems, Inc. | 後退時衝突回避システムにおける誤作動を低減するための方法 |
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2004355272A (ja) * | 2003-05-28 | 2004-12-16 | Toyota Motor Corp | 緊急車両の接近通知システム |
| JP2007066045A (ja) * | 2005-08-31 | 2007-03-15 | Hitachi Ltd | シミュレーション装置 |
| JP2014229303A (ja) * | 2013-05-20 | 2014-12-08 | 三菱電機株式会社 | シーン内の物体を検出する方法 |
| JP2015141721A (ja) * | 2014-01-29 | 2015-08-03 | コンチネンタル オートモーティブ システムズ インコーポレイテッドContinental Automotive Systems, Inc. | 後退時衝突回避システムにおける誤作動を低減するための方法 |
Non-Patent Citations (1)
| Title |
|---|
| MATSUMOTO, YOSHIO ET AL.: "Proposal of Mobile Robot Simulator Using VR Technology and Its Application to View-Based Robot Navigation", JOURNAL OF THE ROBOTICS SOCIETY OF JAPAN, vol. 20, no. 5, 15 July 2002 (2002-07-15), pages 497 - 505 * |
Cited By (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109102537A (zh) * | 2018-06-25 | 2018-12-28 | 中德人工智能研究院有限公司 | 一种激光雷达和球幕相机结合的三维建模方法和系统 |
| CN109102537B (zh) * | 2018-06-25 | 2020-03-20 | 中德人工智能研究院有限公司 | 一种二维激光雷达和球幕相机结合的三维建模方法和系统 |
| US11354461B2 (en) | 2018-09-07 | 2022-06-07 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and device for simulating a distribution of obstacles |
| CN109828267A (zh) * | 2019-02-25 | 2019-05-31 | 国电南瑞科技股份有限公司 | 基于实例分割和深度摄像头的变电站巡检机器人障碍物检测和测距方法 |
| WO2020250620A1 (fr) * | 2019-06-14 | 2020-12-17 | 富士フイルム株式会社 | Dispositif de traitement de données de nuage de points, procédé de traitement de données de nuage de points, et programme |
| JPWO2020250620A1 (fr) * | 2019-06-14 | 2020-12-17 | ||
| JP7344289B2 (ja) | 2019-06-14 | 2023-09-13 | 富士フイルム株式会社 | 点群データ処理装置、点群データ処理方法及びプログラム |
| US12277741B2 (en) | 2019-06-14 | 2025-04-15 | Fujifilm Corporation | Point cloud data processing apparatus, point cloud data processing method, and program |
| CN111523409A (zh) * | 2020-04-09 | 2020-08-11 | 北京百度网讯科技有限公司 | 用于生成位置信息的方法和装置 |
| CN111523409B (zh) * | 2020-04-09 | 2023-08-29 | 北京百度网讯科技有限公司 | 用于生成位置信息的方法和装置 |
| CN111612071A (zh) * | 2020-05-21 | 2020-09-01 | 北京华睿盛德科技有限公司 | 一种从曲面零件阴影图生成深度图的深度学习方法 |
| CN111612071B (zh) * | 2020-05-21 | 2024-02-02 | 北京华睿盛德科技有限公司 | 一种从曲面零件阴影图生成深度图的深度学习方法 |
| CN113299104A (zh) * | 2021-04-20 | 2021-08-24 | 湖南海龙国际智能科技股份有限公司 | 一种增强现实的反向寻车系统及方法 |
| CN113299104B (zh) * | 2021-04-20 | 2022-05-06 | 湖南海龙国际智能科技股份有限公司 | 一种增强现实的反向寻车系统及方法 |
| CN118967948A (zh) * | 2024-10-15 | 2024-11-15 | 园测信息科技股份有限公司 | 三维场景程序化建模方法及装置 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP6548691B2 (ja) | 画像生成システム、プログラム及び方法並びにシミュレーションシステム、プログラム及び方法 | |
| US12020476B2 (en) | Data synthesis for autonomous control systems | |
| US20230251656A1 (en) | Generating environmental parameters based on sensor data using machine learning | |
| WO2018066352A1 (fr) | Système, programme et procédé de génération d'image et système, programme et procédé de simulation | |
| CN110363820B (zh) | 一种基于激光雷达、图像前融合的目标检测方法 | |
| US10943355B2 (en) | Systems and methods for detecting an object velocity | |
| TWI703064B (zh) | 用於在不良照明狀況下定位運輸工具的系統和方法 | |
| WO2018066351A1 (fr) | Système, programme et procédé de simulation | |
| CN107235044B (zh) | 一种基于多传感数据实现对道路交通场景和司机驾驶行为的还原方法 | |
| US11798289B2 (en) | Streaming object detection and segmentation with polar pillars | |
| JP2024055902A (ja) | 手続き的な世界の生成 | |
| KR20210058696A (ko) | 3d 대상체 검출을 위한 순차 융합 | |
| US20220371606A1 (en) | Streaming object detection and segmentation with polar pillars | |
| EP3839823A1 (fr) | Intégration de données à partir de plusieurs capteurs | |
| TW202430840A (zh) | 路徑規劃系統及其路徑規劃方法 | |
| CN117591847B (zh) | 基于车况数据的模型指向评测方法和装置 | |
| WO2022141294A1 (fr) | Procédé et système de test de simulation, simulateur, support de stockage et produit-programme | |
| WO2022246273A1 (fr) | Détection et segmentation d'objet de diffusion avec des piliers polaires | |
| CN118627201A (zh) | 一种智能驾驶控制器的传感器仿真建模方法及装置 | |
| CN116451590A (zh) | 自动驾驶仿真测试平台的仿真方法及装置 | |
| CN115690728A (zh) | 目标速度确定方法、装置、设备、存储介质及车辆 | |
| CN117666553A (zh) | 车辆自动驾驶控制方法、系统和路侧感知设备 | |
| Hossain et al. | RGB2BEV-Net: A PyTorch-Based End-to-End Pipeline for RGB to BEV Segmentation Using an Extended Dataset for Autonomous Driving | |
| CN117593686B (zh) | 基于车况真值数据的模型评测方法和装置 | |
| CN120337786B (zh) | 基于虚拟现实与仿真的神经网络模型训练方法及相关设备 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17858191; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 17858191; Country of ref document: EP; Kind code of ref document: A1 |