CN118627201A - A sensor simulation modeling method and device for intelligent driving controller - Google Patents
- Publication number
- CN118627201A (application CN202410961512.8A)
- Authority
- CN
- China
- Prior art keywords
- sensor
- simulation
- modeling method
- simulation modeling
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/15—Vehicle, aircraft or watercraft design
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2111/00—Details relating to CAD techniques
- G06F2111/18—Details relating to CAD techniques using virtual or augmented reality
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Geometry (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Automation & Control Theory (AREA)
- Aviation & Aerospace Engineering (AREA)
- Computational Mathematics (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Traffic Control Systems (AREA)
- Radar Systems Or Details Thereof (AREA)
Abstract
The present invention discloses a sensor simulation modeling method and device for an intelligent driving controller. The sensor simulation modeling method includes the following steps: S1, first reconstructing the autonomous driving scene, including cars, pedestrians, roads, buildings, and traffic signs, in the digital world from collected data, and controlling the reconstructed scene in simulation to generate rare safety-critical scenarios; S2, simulation modeling of ideal environment sensors in the virtual environment, covering the static background, dynamic objects, and out-of-area objects; S3, simulation modeling of physical perception sensors in the virtual environment, covering camera simulation, lidar simulation, and millimeter-wave radar simulation; S4, hybrid sensor simulation modeling in the virtual environment, taking a target list / environmental parameters as input and a target list / raw sensor data as output; S5, simulation verification of the sensor functional model. The sensor simulation modeling of the present application can accurately simulate the real world and achieve free interaction between the unmanned vehicle and its environment.
Description
Technical Field
The present invention relates to the field of intelligent driving technology, and in particular to a sensor simulation modeling method and device for an intelligent driving controller.
Background Art
In recent years, with the rapid progress of autonomous driving technology, unmanned vehicles have come to perform well in most routine scenarios. However, current technology still cannot guarantee safe deployment, because the real world contains many safety-critical scenarios, and these edge cases are crucial. Simulation testing has therefore become an effective means of helping researchers generate large numbers of edge-case scenarios at low cost, so that existing autonomous driving models can be tested and trained comprehensively. Since unmanned vehicles perceive the real world through the various sensors they carry, realistic and scalable sensor simulation has become an important part of the entire simulation system.
Self-driving cars perceive the surrounding environment through sensors and pass the information to the path planning stage, which decides and predicts future behavior and controls the vehicle's lateral and longitudinal drive systems accordingly. To allow self-driving cars to travel smoothly on real roads and to improve perception accuracy, multi-sensor perception and fusion technology has been widely adopted, laying the foundation for accurately understanding the surrounding environment and transmitting the corresponding data.
One of the main obstacles to the widespread adoption of autonomous driving is that safety is still insufficient: the real world is extremely complex and, in particular, exhibits a long-tail effect. Edge-case scenarios are crucial for safe driving; they are diverse yet rarely encountered. Testing how an autonomous driving system performs in such scenarios is very difficult, because these scenarios are hard to reproduce, and testing them in the real world is expensive and dangerous.
To address this challenge, both industry and academia have turned their attention to the development of simulation systems. Self-driving cars use many types of sensors, such as cameras, radars, lidars, and ultrasonic sensors, covering a variety of purposes and detection ranges. A practical autonomous driving system must achieve highly reliable environmental perception through sensor fusion in order to complete path planning and behavior prediction effectively. However, sensor models for intelligent driving controllers in the existing technology struggle to simulate the real world accurately, and the free interaction between the unmanned vehicle and the environment is poor.
Summary of the Invention
To solve the above problems, the present invention provides a sensor simulation modeling method and device for an intelligent driving controller, addressing the problems raised in the background art above.
The above technical objectives of the present invention are achieved through the following technical solution: a sensor simulation modeling method for an intelligent driving controller, comprising the following steps:
S1. First, reconstruct the autonomous driving scene, including cars, pedestrians, roads, buildings, and traffic signs, in the digital world from the collected data, and control the reconstructed scene in simulation to generate rare safety-critical scenarios;
S2. Simulation modeling of ideal environment sensors in the virtual environment, including the static background, dynamic objects, and out-of-area objects;
S3. Simulation modeling of physical perception sensors in the virtual environment, including camera simulation, lidar simulation, and millimeter-wave radar simulation;
S4. Hybrid sensor simulation modeling in the virtual environment, taking a target list / environmental parameters as input and a target list / raw sensor data as output;
S5. Simulation verification of the sensor functional model.
Further, in step S1, the functional modules of the sensor functional model framework consist of object extraction, occlusion simulation, output simulation, and error simulation. Object extraction quickly extracts the perceived objects from the simulation scene according to the sensor's position, sensing range, and perceived object types;
Occlusion simulation determines the final visible objects by geometric computation, based on the geometric outlines of the objects and their positions relative to the sensor;
Output simulation calculates the ideal output data for the visible objects that need to be output externally, according to the sensor's own characteristics and the output format for the perceived objects, yielding the ideal objects to be output;
Error simulation characterizes the sensor's characteristics by adding Gaussian white noise to the output data of the ideal objects to be output, forming the final perception output.
Further, the static background includes buildings, roads, and traffic signs; the dynamic objects include pedestrians and cars; and the out-of-area objects include the sky and roads.
Further, in step S2, when modeling dynamic objects, a network is used to generate the corresponding object features and share shape information, thereby generating complete car shapes.
Further, in step S3, the camera simulation generates realistic images based on the geometric space of environmental objects and then simulates image synthesis by adding color and optical properties to the three-dimensional models through computer graphics, according to the objects' real materials and textures. The lidar simulation follows the scanning pattern of a real lidar, simulating every ray the lidar emits and receives, and intersecting each emitted ray with all objects in the scene. The millimeter-wave radar simulation emits a series of virtual frequency-modulated continuous millimeter waves in different directions, according to the field of view and resolution of the radar mounted on the test vehicle, and receives the signals reflected by the targets.
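To make the millimeter-wave portion concrete, the range relation behind a frequency-modulated continuous-wave (FMCW) simulation can be sketched as follows. This is a minimal illustration; the chirp bandwidth and duration are assumed example parameters, not values from the patent.

```python
# FMCW radar: a target at range R delays the echo by tau = 2R/c; mixing the
# echo with the transmitted chirp yields a beat frequency f_b = S * tau,
# where S is the chirp slope (Hz/s), so R = c * f_b / (2 * S).
C = 3.0e8            # speed of light, m/s
BANDWIDTH = 300.0e6  # chirp bandwidth, Hz (assumed radar parameter)
T_CHIRP = 40.0e-6    # chirp duration, s (assumed radar parameter)
SLOPE = BANDWIDTH / T_CHIRP

def beat_frequency(target_range_m):
    """Beat frequency produced by a point target at the given range."""
    tau = 2.0 * target_range_m / C
    return SLOPE * tau

def range_from_beat(f_beat_hz):
    """Invert the beat frequency back to a range estimate."""
    return C * f_beat_hz / (2.0 * SLOPE)

f_b = beat_frequency(50.0)   # beat frequency of a target at 50 m
r_est = range_from_beat(f_b)
```

A virtual radar built this way synthesizes one beat frequency per reflecting target inside the field of view; the receiver model then recovers ranges from those frequencies.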
Further, in step S4, the hybrid sensor builds on the ideal sensor by taking factors such as noise and target properties into account, adding an error signal to the data of the ideal objects to be output; part of the sensor's processing chain is modeled at the physical level, and raw sensor data is output.
Further, in step S5, the detection performance of the millimeter-wave radar sensor is simulated according to its physical mechanism. By fitting the millimeter-wave radar sensor model to real millimeter-wave radar measurement data and using statistical methods to compare the range, speed, and angle errors, the accuracy of the millimeter-wave radar sensor model is verified.
The present application also discloses a sensor simulation modeling device for an intelligent driving controller, which adopts the above simulation modeling method.
Compared with the prior art, the present invention has the following beneficial effects:
1. The present application can accurately simulate the real world and achieve free interaction between the unmanned vehicle and the environment;
2. The present application establishes an efficient sensor functional model suitable for concurrent real-time simulation of multiple intelligent vehicles in a virtual environment; it can simulate certain physical phenomena and supports real-time simulation of complex test scenarios in which multiple intelligent vehicles participate simultaneously;
3. The present application simulates the detection performance of the millimeter-wave radar sensor according to its physical mechanism. By fitting the millimeter-wave radar sensor model to real millimeter-wave radar measurement data and using statistical methods to compare the range, speed, and angle errors, the accuracy of the millimeter-wave radar sensor model is verified.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flow chart of the sensor simulation modeling method for an intelligent driving controller in the present invention.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
Referring to FIG. 1, the present invention provides a technical solution: a sensor simulation modeling method for an intelligent driving controller, comprising the following steps:
S1. First, reconstruct the autonomous driving scene, including cars, pedestrians, roads, buildings, and traffic signs, in the digital world from the collected data, and control the reconstructed scene in simulation to generate rare safety-critical scenarios;
Here, the functional modules of the sensor functional model framework consist of object extraction, occlusion simulation, output simulation, and error simulation. Object extraction quickly extracts the perceived objects from the simulation scene according to the sensor's position, sensing range, and perceived object types;
Occlusion simulation determines the final visible objects by geometric computation, based on the geometric outlines of the objects and their positions relative to the sensor;
Output simulation calculates the ideal output data for the visible objects that need to be output externally, according to the sensor's own characteristics and the output format for the perceived objects, yielding the ideal objects to be output;
Error simulation characterizes the sensor's characteristics by adding Gaussian white noise to the output data of the ideal objects to be output, forming the final perception output;
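The four functional modules can be sketched end to end as a small pipeline. The following Python sketch is an illustration only: the 2-D geometry, the angular-overlap occlusion test, the function names, and all noise parameters are hypothetical choices, not taken from the patent.

```python
import math
import random

def extract_objects(objects, sensor_pos, max_range, fov_deg, heading_deg=0.0):
    """Object extraction: keep scene objects inside the sensor's range and field of view."""
    visible = []
    for obj in objects:
        dx, dy = obj["x"] - sensor_pos[0], obj["y"] - sensor_pos[1]
        rng_m = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dy, dx)) - heading_deg
        bearing = (bearing + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        if rng_m <= max_range and abs(bearing) <= fov_deg / 2.0:
            visible.append({**obj, "range": rng_m, "bearing": bearing})
    return visible

def occlusion_filter(visible):
    """Occlusion simulation: a nearer object hides a farther one whose bearing
    falls inside the nearer object's angular extent."""
    kept = []
    for obj in sorted(visible, key=lambda o: o["range"]):
        occluded = any(abs(obj["bearing"] - k["bearing"]) < k["half_width"] for k in kept)
        if not occluded:
            half_width = math.degrees(math.atan2(obj.get("width", 1.0) / 2.0, obj["range"]))
            kept.append({**obj, "half_width": half_width})
    return kept

def add_measurement_noise(objects, sigma_range=0.1, sigma_bearing=0.2, rng=None):
    """Error simulation: add Gaussian white noise to the ideal range/bearing outputs."""
    rng = rng or random.Random(0)
    return [{**o,
             "range": o["range"] + rng.gauss(0.0, sigma_range),
             "bearing": o["bearing"] + rng.gauss(0.0, sigma_bearing)}
            for o in objects]

scene = [
    {"id": "car_near", "x": 20.0, "y": 0.0, "width": 2.0},
    {"id": "car_far", "x": 60.0, "y": 0.5, "width": 2.0},     # behind car_near
    {"id": "ped_side", "x": 10.0, "y": 8.0, "width": 0.6},
    {"id": "truck_out", "x": 200.0, "y": 0.0, "width": 2.5},  # beyond max range
]
visible = extract_objects(scene, (0.0, 0.0), max_range=100.0, fov_deg=90.0)
unoccluded = occlusion_filter(visible)
output = add_measurement_noise(unoccluded)
```

In this sketch the out-of-range truck is dropped by object extraction, the far car is hidden behind the near car by occlusion simulation, and the surviving objects receive Gaussian white noise as the final perception output.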
S2. Simulation modeling of ideal environment sensors in the virtual environment, including the static background, dynamic objects, and out-of-area objects;
The static background includes buildings, roads, and traffic signs; the dynamic objects include pedestrians and cars; and the out-of-area objects include the sky and roads;
When modeling dynamic objects, a network is used to generate the corresponding object features and share shape information, thereby generating complete car shapes;
S3. Simulation modeling of physical perception sensors in the virtual environment, including camera simulation, lidar simulation, and millimeter-wave radar simulation;
Here, the camera simulation generates realistic images based on the geometric space of environmental objects and then simulates image synthesis by adding color and optical properties to the three-dimensional models through computer graphics, according to the objects' real materials and textures. The lidar simulation follows the scanning pattern of a real lidar, simulating every ray the lidar emits and receives, and intersecting each emitted ray with all objects in the scene. The millimeter-wave radar simulation emits a series of virtual frequency-modulated continuous millimeter waves in different directions, according to the field of view and resolution of the radar mounted on the test vehicle, and receives the signals reflected by the targets;
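The ray-casting core of such a lidar simulation can be sketched as follows. This is a minimal illustration that models scene objects as spheres and casts one horizontal ray per azimuth step; the sphere model, beam pattern, and all parameters are simplifying assumptions, not the patent's implementation.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Smallest positive ray parameter t where the unit-length ray hits the sphere, else None."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(direction[i] * oc[i] for i in range(3))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def lidar_scan(origin, spheres, n_beams=360, max_range=100.0):
    """One horizontal sweep: cast a ray per azimuth and keep the nearest hit point."""
    points = []
    for k in range(n_beams):
        az = 2.0 * math.pi * k / n_beams
        d = (math.cos(az), math.sin(az), 0.0)
        hits = [t for s in spheres
                for t in [ray_sphere_hit(origin, d, s["center"], s["radius"])]
                if t is not None and t <= max_range]
        if hits:
            t = min(hits)  # nearest surface along this beam
            points.append((origin[0] + t * d[0],
                           origin[1] + t * d[1],
                           origin[2] + t * d[2]))
    return points

targets = [{"center": (10.0, 0.0, 0.0), "radius": 1.0},
           {"center": (0.0, 20.0, 0.0), "radius": 2.0}]
cloud = lidar_scan((0.0, 0.0, 0.0), targets)
```

Each beam that intersects geometry contributes one point, so the returned list plays the role of a single-layer point cloud; a full simulator would add elevation layers, surface reflectivity, and dropout.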
S4. Hybrid sensor simulation modeling in the virtual environment, taking a target list / environmental parameters as input and a target list / raw sensor data as output;
Here, the hybrid sensor builds on the ideal sensor by taking factors such as noise and target properties into account, adding an error signal to the data of the ideal objects to be output; part of the sensor's processing chain is modeled at the physical level, and raw sensor data is output;
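A minimal sketch of the hybrid-sensor idea, under assumed noise and detection models: the range-dependent noise law, the detection probability, and the RCS handling below are illustrative choices, not taken from the patent.

```python
import random

def hybrid_radar_output(targets, sigma0=0.05, k_range=0.002, p_detect=0.9, seed=0):
    """Hybrid model: start from the ideal target list, then add a range-dependent
    Gaussian error signal and random misses to mimic raw radar detections."""
    rng = random.Random(seed)
    detections = []
    for t in targets:
        if rng.random() > p_detect:              # occasional missed detection
            continue
        sigma_r = sigma0 + k_range * t["range"]  # assumed: noise grows with range
        detections.append({
            "range": t["range"] + rng.gauss(0.0, sigma_r),
            "speed": t["speed"] + rng.gauss(0.0, 0.1),
            "rcs": t.get("rcs", 10.0) + rng.gauss(0.0, 1.0),
        })
    return detections

ideal = [{"range": 25.0, "speed": -3.0, "rcs": 12.0},
         {"range": 80.0, "speed": 1.5, "rcs": 8.0}]
raw = hybrid_radar_output(ideal, seed=42)
```

The input is the ideal target list and the output imitates raw detections, matching the target-list-in / raw-data-out interface described for step S4.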
S5. Simulation verification of the sensor functional model: the detection performance of the millimeter-wave radar sensor is simulated according to its physical mechanism. By fitting the millimeter-wave radar sensor model to real millimeter-wave radar measurement data and using statistical methods to compare the range, speed, and angle errors, the accuracy of the millimeter-wave radar sensor model is verified.
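The statistical comparison in the verification step can be illustrated as follows. The paired range readings are invented sample data, and the chosen statistics (mean error, standard deviation, RMSE) are standard choices rather than an exact account of the patent's method; the same function would be applied to speed and angle errors.

```python
import math

def error_stats(model_vals, truth_vals):
    """Mean error, standard deviation, and RMSE between model output and reference data."""
    errs = [m - t for m, t in zip(model_vals, truth_vals)]
    n = len(errs)
    mean = sum(errs) / n
    std = math.sqrt(sum((e - mean) ** 2 for e in errs) / n)
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    return {"mean": mean, "std": std, "rmse": rmse}

# Hypothetical paired readings: simulated vs. real radar range measurements (meters).
sim_range = [24.9, 50.2, 75.1, 99.8]
real_range = [25.0, 50.0, 75.0, 100.0]
range_stats = error_stats(sim_range, real_range)
```

A model is judged accurate when these statistics for range, speed, and angle all fall within the tolerances of the real sensor being imitated.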
The sensor simulation modeling method for an intelligent driving controller in this embodiment can accurately simulate the real world and achieve free interaction between the unmanned vehicle and the environment. It establishes an efficient sensor functional model suitable for concurrent real-time simulation of multiple intelligent vehicles in a virtual environment, which can simulate certain physical phenomena and supports real-time simulation of complex test scenarios in which multiple intelligent vehicles participate simultaneously. It can also simulate the detection performance of the millimeter-wave radar sensor according to its physical mechanism; by fitting the millimeter-wave radar sensor model to real millimeter-wave radar measurement data and using statistical methods to compare the range, speed, and angle errors, the accuracy of the millimeter-wave radar sensor model is verified.
This embodiment also discloses a sensor simulation modeling device for an intelligent driving controller, which adopts the above simulation modeling method.
The above is only a preferred embodiment of the present invention, and the scope of protection of the present invention is not limited to the above embodiments; all technical solutions falling under the concept of the present invention belong to the scope of protection of the present invention. It should be noted that, for those of ordinary skill in the art, several improvements and refinements made without departing from the principle of the present invention shall also be regarded as falling within the scope of protection of the present invention.
Claims (8)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410961512.8A CN118627201A (en) | 2024-07-18 | 2024-07-18 | A sensor simulation modeling method and device for intelligent driving controller |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410961512.8A CN118627201A (en) | 2024-07-18 | 2024-07-18 | A sensor simulation modeling method and device for intelligent driving controller |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN118627201A true CN118627201A (en) | 2024-09-10 |
Family
ID=92612387
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410961512.8A Pending CN118627201A (en) | 2024-07-18 | 2024-07-18 | A sensor simulation modeling method and device for intelligent driving controller |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118627201A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119203555A (en) * | 2024-09-22 | 2024-12-27 | 苏州智行众维智能科技有限公司 | A 4D millimeter wave radar sensor simulation method for autonomous driving simulation testing |
| CN119647117A (en) * | 2024-12-02 | 2025-03-18 | 深圳清华大学研究院 | A method for storing and reading radar reflectivity of 3D model for autonomous driving simulation |
- 2024-07-18: CN application CN202410961512.8A filed; patent CN118627201A, status pending
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119203555A (en) * | 2024-09-22 | 2024-12-27 | 苏州智行众维智能科技有限公司 | A 4D millimeter wave radar sensor simulation method for autonomous driving simulation testing |
| CN119647117A (en) * | 2024-12-02 | 2025-03-18 | 深圳清华大学研究院 | A method for storing and reading radar reflectivity of 3D model for autonomous driving simulation |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20190236380A1 (en) | Image generation system, program and method, and simulation system, program and method | |
| US12020476B2 (en) | Data synthesis for autonomous control systems | |
| CN112526893B (en) | A test system for smart cars | |
| US20210406562A1 (en) | Autonomous drive emulation methods and devices | |
| CN118627201A (en) | A sensor simulation modeling method and device for intelligent driving controller | |
| Vacek et al. | Learning to predict lidar intensities | |
| CN109884916A (en) | A kind of automatic Pilot Simulation Evaluation method and device | |
| WO2018066352A1 (en) | Image generation system, program and method, and simulation system, program and method | |
| CN116449807B (en) | Simulation test method and system for automobile control system of Internet of things | |
| CN112668603A (en) | Method and device for generating training data for a recognition model for recognizing objects in sensor data, training method and control method | |
| CN115469564A (en) | Vehicle automatic parking test system, method, vehicle and storage medium | |
| AU2024200044B2 (en) | Point Cloud Data Generation Method, Apparatus, And Device, And Storage Medium | |
| CN117269940B (en) | Point cloud data generation method, lidar sensing capability verification method | |
| Lee et al. | Virtual lidar sensor intensity data modeling for autonomous driving simulators | |
| CN114280562A (en) | Radar simulation test method and computer readable storage medium implementing the method | |
| Elmadawi et al. | End-to-end sensor modeling for LiDAR Point Cloud | |
| Weiguo et al. | An augmented reality-based proving ground vehicle-in-the-loop test platform | |
| Yang et al. | Methods for improving point cloud authenticity in LiDAR simulation for autonomous driving: a review | |
| CN111897241A (en) | Sensor fusion multi-target simulation hardware-in-loop simulation system | |
| CN119323141A (en) | Foggy scene generation method and system for automatic driving simulation test | |
| Sural et al. | CoSim: a co-simulation framework for testing autonomous vehicles in adverse operating conditions | |
| CN112560258B (en) | Test method, device, equipment and storage medium | |
| US20250237741A1 (en) | Information processing device, information processing method, and program | |
| Kohlisch et al. | LiDAR-Based Augmented Reality for the Development of Test Scenarios on Safety for Autonomous Operation of a Shunting Locomotive | |
| KR20250100930A (en) | Real-time simulation method for LiDAR sensor in a virtual road environment of autonomous driving simulation |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |