CN113748314B - Interactive three-dimensional point cloud matching
- Publication number: CN113748314B; Application number: CN201880100676.4A
- Authority
- CN
- China
- Prior art keywords
- point cloud
- cloud data
- user
- user interface
- data
- Prior art date
- Legal status: Active (assumed; not a legal conclusion)
Classifications
- G01C21/3841: Electronic maps for navigation; creation or updating of map data from two or more sources, e.g. probe vehicles
- B60K35/10: Input arrangements, i.e. from user to vehicle, associated with vehicle functions
- B60K35/28: Output arrangements characterised by the type or purpose of the output information, e.g. vehicle dynamics information
- B60K35/85: Arrangements for transferring vehicle- or driver-related data
- G01S17/88: Lidar systems specially adapted for specific applications
- G01S17/89: Lidar systems for mapping or imaging
- G01S7/4808: Evaluating distance, position or velocity data
- G06F3/0481: GUI interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment
- G06F3/04845: GUI interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
- G06T17/05: Three-dimensional modelling; geographic models
- G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- B60K2360/166: Type of output information; navigation
- B60K2360/592: Data transfer involving external databases
- G06T2200/24: Image data processing involving graphical user interfaces [GUIs]
- G06T2219/2012: Editing of 3D models; colour editing, changing, or manipulating; use of colour codes
- G06T2219/2016: Editing of 3D models; rotation, translation, scaling
Abstract
Systems and methods are disclosed for generating an interactive user interface that enables a user to move, rotate, or otherwise edit three-dimensional point cloud data in a virtual three-dimensional space in order to align or match point clouds captured by light detection and ranging scans prior to generating a high resolution map. The system may obtain point cloud data for two or more point clouds, render the point clouds for display in a user interface, and then receive a user selection of one of the point clouds and a command from the user to move and/or rotate the selected point cloud. The system may adjust the display position of the selected point cloud relative to other concurrently displayed point clouds in real time in response to the user command, and store the adjusted point cloud position data for use in generating a new high resolution map.
Description
Incorporation of priority applications by reference
Any and all applications, if any, for which a foreign or domestic priority claim is identified in the application data sheet of the present application are incorporated herein by reference in their entirety in accordance with 37 CFR 1.57.
Copyright statement
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the U.S. patent and trademark office file and/or records, but otherwise reserves all copyright rights whatsoever.
Background
Vehicles, such as vehicles used for shared travel purposes, vehicles providing driver assistance functions, and/or automated or autonomous driving vehicles (AVs), may acquire and process sensor data using an onboard data processing system to perform a wide variety of functions. For example, the functions may include determining and/or displaying navigation routes, identifying road signs, detecting objects and/or road obstacles, controlling vehicle operation, and the like. Providing an accurate and precise high resolution map for an autonomous vehicle is one of the most basic and important prerequisites for achieving fully autonomous driving. For safety reasons, the maps that an autonomous car needs to access must contain far more detailed information, with absolute ground-truth accuracy, than typical existing map resources, which are not designed for autonomous driving purposes.
Drawings
FIG. 1A illustrates a block diagram of a networked vehicle environment in which one or more vehicles and/or one or more user devices interact with a server, according to one embodiment.
FIG. 1B illustrates a block diagram showing the vehicle of FIG. 1A communicating with one or more other vehicles and/or servers of FIG. 1A, according to one embodiment.
FIG. 2 illustrates a block diagram showing the server of FIGS. 1A and 1B in communication with a map editor device, in accordance with one embodiment.
FIG. 3 is an illustrative user interface including a zoomed-out view of a three-dimensional point cloud rendering and a two-dimensional map projection including graphical indicators representing different light detection and ranging scan areas.
FIG. 4 is an illustrative user interface including an enlarged view of a three-dimensional point cloud rendering and two-dimensional map projection, including overlaid graphical indicators of nodes and connections within a pose graph associated with point cloud data.
FIG. 5 is an illustrative user interface including three-dimensional point cloud rendering and two-dimensional map projection, with two user-selected nodes removed.
FIG. 6 is an illustrative user interface including a zoomed-out view of a three-dimensional point cloud rendering and two-dimensional map projection, in which the displayed pose graph data has been altered based on user interaction with the user interface.
FIG. 7 is a flow chart of an illustrative method for providing user interface functionality that enables a user to view and edit point cloud and pose graph data for generating a high resolution map.
FIG. 8 is an illustrative user interface including an enlarged view of a three-dimensional point cloud rendering and two-dimensional map projection, including a display of distance measurements between user-selected points.
FIG. 9 is an illustrative user interface that includes three-dimensional point cloud rendering of two point clouds and that enables a user to visually realign or match points in the respective point clouds.
FIG. 10 is a flow chart of an illustrative method for enabling a user to visually edit the positioning of one or more point clouds for generating a high resolution map.
Detailed Description
Constructing a large high resolution map (high definition map, HD map), such as a map of an entire city, is a relatively new technical field. One of the challenges is that a large amount of captured data must be processed and surveyed (typically programmatically) through a multi-part map building pipeline. In addition to the final outputs, a dense three-dimensional point cloud and a two-dimensional map image, a typical high resolution map construction process also produces intermediate results, such as light detection and ranging (LIDAR) scans and corresponding pose graphs. Existing methods of constructing high resolution maps often lack efficient tools for enabling a user of a computing system to visually survey captured light detection and ranging scan data in a particular region, visualize intermediate and final results, and interactively alter the intermediate data through a graphical user interface to improve the quality of the final high resolution map data. Aspects of the present disclosure include a variety of user interface tools and associated computer functionality that enable integrated visual exploration and editing of two-dimensional and three-dimensional visualizations of captured light detection and ranging data, pose graphs, and map data, in order to construct a more accurate high resolution map.
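For concreteness, the following is a minimal, hypothetical sketch (in Python) of the kind of intermediate data referred to above: a pose graph whose nodes hold LIDAR point clouds with estimated poses, and whose edges hold relative transforms produced by scan matching. The class and field names are illustrative assumptions and are not taken from the disclosure.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PoseNode:
    """One LIDAR scan: an Nx3 point cloud plus its estimated world pose."""
    node_id: int
    points: np.ndarray   # (N, 3) points in the scan's local frame
    pose: np.ndarray     # (4, 4) homogeneous transform, local -> world

@dataclass
class PoseEdge:
    """Constraint between two scans produced by scan matching (e.g., ICP)."""
    source: int
    target: int
    relative_transform: np.ndarray  # (4, 4) transform, source -> target
    match_score: float              # alignment quality reported by the matcher

@dataclass
class PoseGraph:
    nodes: dict = field(default_factory=dict)   # node_id -> PoseNode
    edges: list = field(default_factory=list)   # list of PoseEdge

    def world_points(self, node_id: int) -> np.ndarray:
        """Transform a node's local point cloud into world coordinates."""
        node = self.nodes[node_id]
        homo = np.hstack([node.points, np.ones((len(node.points), 1))])
        return (node.pose @ homo.T).T[:, :3]
```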
Detailed descriptions and examples of systems and methods according to one or more illustrative embodiments of the present disclosure may be found in the sections entitled "Improved high resolution map generation features and related interfaces" and "Example embodiments," as well as in FIGS. 2-10 herein. Further, the components and functionality of the interactive user interfaces and associated high resolution map generation features may be configured and/or incorporated into the networked vehicle environment 100 described with reference to FIGS. 1A and 1B.
The various embodiments described herein are closely related to, can be implemented by, and depend upon vehicle and/or computer technology. For example, as described herein with reference to various embodiments, generating interactive graphical user interfaces that display, and implement the associated computer functions for manipulating, potentially millions of data points in a three-dimensional virtual space could not reasonably be performed by a human alone without the vehicle and/or computer technology upon which such interactive user interfaces rely.
■Networked vehicle environment
FIG. 1A illustrates a block diagram of a networked vehicle environment 100 in which one or more vehicles 120 and/or one or more user devices 102 interact with a server 130 via a network 110, according to one embodiment. For example, the vehicle 120 may be equipped to provide shared travel and/or other location-based services to assist the driver in controlling vehicle operation (e.g., via a variety of driver assistance features such as adaptive and/or conventional cruise control, adaptive front light control, antilock braking, automatic parking, night vision, blind spot monitoring, collision avoidance, crosswind stabilization, driver fatigue detection, driver monitoring systems, emergency driver assistance, intersection assistance, steep slope descent control, intelligent speed adaptation, lane centering, lane departure warning, front, rear and/or side parking sensors, pedestrian detection, rain sensors, look-around systems, tire pressure monitors, traffic sign recognition, steering assistance, reverse driving warning, traffic condition cues, etc.) and/or to fully control vehicle operation. Thus, the vehicle 120 may be a conventional gasoline, natural gas, biofuel, electric, hydrogen, etc., vehicle configured to provide shared travel and/or other location-based services, provide driver assistance functionality (e.g., one or more of the driver assistance features described herein), or an automated or autonomous driving vehicle (AV). The vehicle 120 may be a car, truck, minibus, bus, motorcycle, scooter, bicycle, and/or any other motor vehicle.
The server 130 may communicate with the vehicle 120 to obtain vehicle data, such as route data, sensor data, awareness data, vehicle 120 control data, vehicle 120 component failure and/or failure data, and the like. Server 130 may process and store these vehicle data for use in other operations performed by server 130 and/or another computing system (not shown). Such operations may include running a diagnostic model for identifying a problem with the operation of the vehicle 120 (e.g., cause of a navigation error of the vehicle 120, abnormal sensor readings, unidentified objects, malfunction of a component of the vehicle 120, etc.); running a model for simulating the performance of the vehicle 120 given a set of variables; identifying objects that are not identifiable by the vehicle 120, generating control instructions that, when executed by the vehicle 120, cause the vehicle 120 to drive and/or maneuver in some manner along a specified path; and/or the like.
The server 130 may also transmit data to the vehicle 120. For example, the server 130 may transmit map data, firmware and/or software updates, vehicle 120 control instructions, identification of objects that have not been otherwise identified by the vehicle 120, passenger access information, traffic data, and/or the like.
In addition to communicating with one or more vehicles 120, the server 130 can also communicate with one or more user devices 102. In particular, the server 130 may provide web services to enable users to request location-based services (e.g., shipping services, such as shared travel services) through applications running on the user device 102. For example, the user device 102 may correspond to a computing device, such as a smart phone, tablet, notebook, smart watch, or any other device, that communicates with the server 130 over the network 110. In this embodiment, the user device 102 executes an application, such as a mobile application, which may be used by a user operating the user device 102 to interact with the server 130. For example, the user device 102 may communicate with the server 130 to provide location data and/or queries to the server 130, receive map-related data and/or directions from the server 130, and/or the like.
Server 130 may process the request and/or other data received from user device 102 to identify a service provider (e.g., a driver of vehicle 120) to provide the requested service to the user. Further, the server 130 may receive data, such as user travel access or destination data, user location query data, etc., based on which the server 130 identifies areas, addresses, and/or other locations associated with various users. The server 130 may then use the identified location to provide a direction to the service provider and/or user pointing to the determined access location.
The application running on the user device 102 may be created and/or made available by the same entity responsible for the server 130. Alternatively, the application running on the user device 102 may be a third party application (e.g., an application programming interface or a software development kit) that contains features that enable communication with the server 130.
For simplicity and ease of explanation, a single server 130 is illustrated in FIG. 1A. However, it should be understood that server 130 may be a single computing device or may include multiple different computing devices that are logically or physically combined together to collectively operate as a server system. The components of server 130 may be implemented in dedicated hardware (e.g., a server computing device having one or more ASICs) so that no software is required, or as a combination of hardware and software. Further, the modules and components of server 130 may be combined on one server computing device or disposed separately or in groups on multiple server computing devices. In some embodiments, server 130 may include more or fewer components than shown in FIG. 1A.
Network 110 includes any wired network, wireless network, or combination thereof. For example, the network 110 may be a personal area network, a local area network, a wide area network, an over-the-air network (e.g., for broadcast or television), a cable network, a satellite network, a cellular telephone network, or a combination thereof. As yet another example, the network 110 may be a publicly accessible network linking networks, possibly operated by a variety of different institutions, such as the internet. In some embodiments, network 110 may be a private or semi-private network, such as a corporate or university intranet. The network 110 may include one or more wireless networks, such as a global system for mobile communications (GSM) network, a Code Division Multiple Access (CDMA) network, a Long Term Evolution (LTE) network, or any other type of wireless network. The network 110 may use protocols and components to communicate over the internet or any other network of the type described above. For example, protocols used by network 110 may include hypertext transfer protocol (HTTP), hypertext transfer security protocol (HTTPs), message Queue Telemetry Transport (MQTT), constrained application protocol (CoAP), and the like. Protocols and components for communicating via the internet or any other communication network of the type described above are well known to those skilled in the art and will therefore not be described in further detail herein.
The server 130 may include a navigation unit 140, a vehicle data processing unit 145, and a data store 150. The navigation unit 140 may assist in location-based services. For example, the navigation unit 140 may assist a user (also referred to herein as a "driver") in transporting another user (also referred to herein as a "rider") and/or an object (e.g., food, parcel, etc.) from a first location (also referred to herein as a "pickup location") to a second location (also referred to herein as a "destination location"). The navigation unit 140 may assist in achieving user and/or object transport by providing maps and/or navigation instructions to applications running on the user device 102 of the rider, to applications running on the user device 102 of the driver, and/or to navigation systems running on the vehicle 120.
As an example, the navigation unit 140 may include a matching service (not shown) that pairs a rider, who requests a journey from a pickup location to a destination location, with a driver who can complete the journey. The matching service may interact with an application running on the user device 102 of the rider and/or an application running on the user device 102 of the driver to establish the rider's journey and/or to process payments from the rider to the driver.
The navigation unit 140 may also communicate with an application running on the driver's user device 102 during the journey to obtain journey location information from the user device 102 (e.g., via a Global Positioning System (GPS) component coupled to and/or embedded in the user device 102), and provide navigation directions to the application that aid the driver in driving from the current location to the destination location. The navigation unit 140 may also indicate a plurality of different geographic locations or points of interest to the driver, whether or not the driver is carrying a rider.
The vehicle data processing unit 145 may be configured to support driver assistance features of the vehicle 120 and/or to support autonomous driving. For example, the vehicle data processing unit 145 may generate map data and/or transmit it to the vehicle 120, run a diagnostic model for identifying operational problems of the vehicle 120, run a model for simulating performance of the vehicle 120 given a set of variables, identify objects using the vehicle data provided by the vehicle 120 and transmit an identification of the objects to the vehicle 120, generate vehicle 120 control instructions and/or transmit them to the vehicle 120, and/or the like.
The data storage 150 may store various types of data used by the navigation unit 140, the vehicle data processing unit 145, the user device 102, and/or the vehicle 120. For example, the data store 150 may store user data 152, map data 154, search data 156, and log data 158.
The user data 152 may include information regarding some or all users of the location-based services, such as drivers and riders. The information may include, for example, a user name, password, name, address, billing information, data associated with previous trips taken or requested by the user, user rating information, user loyalty program information, and/or the like.
Map data 154 may include high resolution maps generated from sensors (e.g., light detection and ranging (LiDAR) sensors, radio detection and ranging (RADAR) sensors, infrared cameras, visible light cameras, stereo cameras, inertial Measurement Units (IMUs), etc.), satellite images, optical Character Recognition (OCR) (e.g., identifying street names, identifying street sign words, identifying point of interest names, etc.) performed on captured street images, etc.; information for calculating a route; information for rendering a two-dimensional and/or three-dimensional graphical map; and/or the like. For example, the map data 154 may include elements: such as street and intersection layouts, bridges (e.g., including information about the height and/or width of an overpass), exit ramps, buildings, parking structure entrances and exits (e.g., including information about the height and/or width of a vehicle entrance and/or exit), locations of guideboards and stop lights, emergency junctions, points of interest (e.g., parks, restaurants, gas stations, attractions, landmarks, etc., and associated names), road markings (e.g., centerline markings separating opposite lanes, lane markings, parking lines, left turn guidance lines, right turn guidance lines, crosswalk, bus lane markings, bike lane markings, safety island markings, pavement words, highway exits and entrance markings, etc.), curbs, railway lines, waterways, turn radii and/or angles of left and right turns, distances and dimensions of road features, locations of spacers between bi-directional traffic, and/or the like elements, along with geographic locations (e.g., geographic coordinates) associated with these elements. The map data 154 may also include reference data such as real-time and/or historical traffic information, current and/or predicted weather conditions, road work information, information about laws and regulations (e.g., speed limits, whether right turns are allowed or forbidden at red light, whether turning around is allowed or forbidden, allowed directions of travel, and/or the like), news events, and/or the like.
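Purely for illustration, map elements of the kind enumerated above might be represented as typed records pairing a feature type with its geometry in geographic coordinates; the structure and field names below are assumptions, not the storage format described in the disclosure.

```python
from dataclasses import dataclass
from typing import List, Tuple

# (latitude, longitude, altitude) triple; a hypothetical convention for this sketch.
GeoPoint = Tuple[float, float, float]

@dataclass
class MapElement:
    element_type: str          # e.g. "lane_marking", "stop_line", "guideboard"
    geometry: List[GeoPoint]   # polyline or polygon vertices for the element
    attributes: dict           # e.g. {"clearance_height_m": 4.2} for an overpass

@dataclass
class HdMapTile:
    tile_id: str
    elements: List[MapElement]
    reference_data: dict       # e.g. {"speed_limit_kph": 50, "u_turn_allowed": False}
```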
Although the map data 154 is shown as being stored in the data store 150 of the server 130, this is not meant to be limiting. For example, the server 130 may transmit the map data 154 to the vehicle 120 for storage therein (e.g., in the data store 129, as described below).
Search data 156 may include searches that were entered by a number of different users. For example, the search data 156 may include a text search for access and/or destination locations. The search may be for a particular address, geographic location, name associated with a geographic location (e.g., name of a park, restaurant, gas station, attraction, landmark, etc.), and so forth.
The log data 158 may include vehicle data provided by one or more vehicles 120. For example, the vehicle data may include route data, sensor data, awareness data, vehicle 120 control data, vehicle 120 component failure and/or failure data, and the like.
FIG. 1B illustrates a block diagram showing the vehicle 120 of FIG. 1A communicating with one or more other vehicles 170A-N and/or the server 130 of FIG. 1A, according to one embodiment. As shown in fig. 1B, the vehicle 120 may include various components and/or data stores. For example, vehicle 120 may include a sensor array 121, a communication array 122, a data processing system 123, a communication system 124, an internal interface system 125, a vehicle control system 126, an operating system 127, a mapping engine 128, and/or a data store 129.
Communications 180 may be sent and/or received between vehicle 120, one or more vehicles 170A-N, and/or server 130. The server 130 may transmit and/or receive data from the vehicle 120, as described above in connection with fig. 1A. For example, the server 130 may transmit vehicle control instructions or commands to the vehicle 120 (e.g., as the communication 180). The vehicle control instructions may be received by a communication array 122 (e.g., an array of one or more antennas configured to transmit and/or receive wireless signals) that is operated by a communication system 124 (e.g., a transceiver). The communication system 124 may communicate vehicle control instructions to a vehicle control system 126 that may operate acceleration, steering, braking, lights, signals, and other operating systems 127 of the vehicle 120 to drive and/or maneuver the vehicle 120 and/or assist a driver in driving and/or maneuvering the vehicle 120 through road traffic to a destination location specified by the vehicle control instructions.
As an example, the vehicle control instructions may include route data 163 that may be processed by the vehicle control system 126 to steer the vehicle 120 and/or assist a driver in steering the vehicle 120 along a given route (e.g., an optimized route calculated by the server 130 and/or the mapping engine 128) toward a specified destination location. In processing the route data 163, the vehicle control system 126 may generate control commands 164 for execution by the operating system 127 (e.g., acceleration, steering, braking, maneuvering, reversing, etc.) to cause the vehicle 120 to travel along the route to the destination location and/or to assist a driver in maneuvering the vehicle 120 along the route to the destination location.
The destination location 166 may be specified by the server 130 based on user requests (e.g., access requests, delivery requests, etc.) communicated by applications running on the user device 102. Alternatively or additionally, a rider and/or driver of the vehicle 120 may provide the destination location 166 by providing user input 169 via the internal interface system 125 (e.g., a vehicle navigation system). In some embodiments, the vehicle control system 126 may communicate the input destination location 166 and/or the current location of the vehicle 120 (e.g., as a GPS data packet) as a communication 180 to the server 130 via the communication system 124 and the communication array 122. The server 130 (e.g., the navigation unit 140) may perform an optimization operation using the current location of the vehicle 120 and/or the input destination location 166 to determine an optimal route for the vehicle 120 to travel to the destination location 166. Route data 163, including the optimal route, may be communicated from the server 130 to the vehicle control system 126 via the communication array 122 and the communication system 124. As a result of receiving the route data 163, the vehicle control system 126 can enable the operating system 127 to steer the vehicle 120 through traffic along the optimal route to the destination location 166, assist a driver in steering the vehicle 120 through traffic along the optimal route to the destination location 166, and/or enable the internal interface system 125 to display and/or present instructions for steering the vehicle 120 through traffic along the optimal route to the destination location 166.
Alternatively or additionally, when the route data 163 includes an optimal route, the vehicle control system 126 may automatically input the route data 163 into the mapping engine 128. The mapping engine 128 may generate map data 165 using the optimal route (e.g., generate a map that displays the optimal route and/or instructions for following the optimal route) and provide the map data 165 to the internal interface system 125 (e.g., via the vehicle control system 126) for display. The map data 165 may include information derived from the map data 154 stored in the data store 150 on the server 130. The displayed map data 165 may indicate the estimated time of arrival and/or display the travel progress of the vehicle 120 along the optimal route. The displayed map data 165 may also include indicators such as diversion commands, emergency notifications, road work information, real-time traffic data, current weather conditions, information about laws and regulations (e.g., speed limits, whether right turns are allowed or forbidden at red lights, whether turning around is allowed or forbidden, allowed directions of travel, etc.), news events, and/or the like.
The user input 169 may also be a request to access a network (e.g., network 110). In response to such requests, the internal interface system 125 may generate access requests 168, which may be processed by the communication system 124 to configure the communication array 122 to send and/or receive data corresponding to user interactions with the internal interface system 125 and/or user device 102 interactions with the internal interface system 125 (e.g., user devices 102 connected to the internal interface system 125 through a wireless connection). For example, the vehicle 120 may include an onboard Wi-Fi that passengers and/or drivers may access to send and/or receive email and/or text messages, audio streams and/or video content, browse content pages (e.g., web pages, website pages, etc.), and/or access applications using web access. Based on the user interactions, internal interface system 125 may receive content 167 via network 110, communication array 122, and/or communication system 124. Communication system 124 may dynamically manage network access to avoid or minimize disruption of transmission of content 167.
The sensor array 121 may include any number of one or more types of sensors, such as satellite radio navigation systems (e.g., GPS), light detection and ranging sensors, landscape sensors (e.g., radio detection and ranging sensors), inertial measurement units, cameras (e.g., infrared cameras, visible light cameras, stereo cameras, etc.), wi-Fi detection systems, cellular communication systems, inter-vehicle communication systems, road sensor communication systems, feature sensors, proximity sensors (e.g., infrared, electromagnetic, photoelectric, etc.), distance sensors, depth sensors, and/or the like. The satellite radio navigation system may calculate the current location of the vehicle 120 (e.g., in the range of 1-10 meters) based on an analysis of the signals received from the satellite constellation.
Light detection and ranging sensors, radio detection and ranging, and/or any other similar type of sensor may be used to detect the surrounding environment of the vehicle 120 when the vehicle 120 is in motion or is about to begin motion. For example, light detection and ranging sensors may be used to reflect multiple laser beams from approaching objects to assess their distance and provide accurate three-dimensional information about the surrounding environment. The data obtained from the light detection and ranging sensors may be used to perform object recognition, motion vector determination, collision prediction, and/or implement accident avoidance processes. Alternatively, the light detection and ranging sensor may use a rotating scanning mirror assembly to provide a 360 ° viewing angle. Light detection and ranging sensors may optionally be mounted on the roof of the vehicle 120.
The inertial measurement unit may include X-, Y-, and Z-oriented gyroscopes and/or accelerometers. The inertial measurement unit provides data regarding the rotational and linear motion of the vehicle 120, which can be used to calculate the motion and position of the vehicle 120.
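A simplified sketch of how gyroscope and accelerometer samples can be integrated into motion and position estimates is shown below; it assumes an idealized, noise-free dead-reckoning model, and the disclosure does not specify the filtering actually used.

```python
import numpy as np

def integrate_imu(position, velocity, rotation, gyro, accel, dt):
    """One naive dead-reckoning step from IMU samples.

    position, velocity: (3,) vectors in the world frame
    rotation: (3, 3) body-to-world rotation matrix
    gyro: (3,) angular rate in rad/s (body frame)
    accel: (3,) specific force in m/s^2 (body frame)
    dt: sample interval in seconds
    """
    gravity = np.array([0.0, 0.0, -9.81])

    # Update orientation with a small rotation about the gyro axis (Rodrigues formula).
    angle = np.linalg.norm(gyro) * dt
    if angle > 0:
        axis = gyro / np.linalg.norm(gyro)
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        delta_r = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K
        rotation = rotation @ delta_r

    # Rotate specific force into the world frame, remove gravity, integrate twice.
    world_accel = rotation @ accel + gravity
    velocity = velocity + world_accel * dt
    position = position + velocity * dt
    return position, velocity, rotation
```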
The camera may be used to capture visual images of the environment surrounding the vehicle 120. Depending on the configuration and number of cameras in particular, the cameras may provide a 360 ° view around the vehicle 120. The image from the camera may be used to read road markings (e.g., lane markings), read street signs, detect objects, and/or the like.
Wi-Fi detection systems and/or cellular communication systems may be used to triangulate Wi-Fi hotspots or cellular towers, respectively, to determine the location of the vehicle 120 (optionally in combination with satellite radio navigation systems).
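One hedged illustration of such triangulation is a least-squares fit of a position to range estimates from transmitters at known locations; the anchor coordinates and solver choice below are assumptions made only for this sketch.

```python
import numpy as np
from scipy.optimize import least_squares

def trilaterate(anchors, ranges, initial_guess):
    """Estimate a 2D position from ranges to known anchor points.

    anchors: (M, 2) array of hotspot/tower coordinates
    ranges: (M,) array of estimated distances to each anchor
    initial_guess: (2,) starting position for the solver
    """
    def residuals(p):
        return np.linalg.norm(anchors - p, axis=1) - ranges

    return least_squares(residuals, initial_guess).x

# Example: three towers, true position roughly (3, 4)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)
print(trilaterate(anchors, ranges, np.array([5.0, 5.0])))  # approximately [3. 4.]
```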
The inter-vehicle communication system (which may include a Wi-Fi detection system, a cellular communication system, and/or the communication array 122) may be used to receive data from and/or transmit data to other vehicles 170A-N, such as the current speed and/or position coordinates of the vehicle 120, the time and/or position coordinates at which deceleration is planned together with the planned deceleration rate, the time and/or position coordinates at which a stop is planned, the time and/or position coordinates at which a lane change is planned together with the lane change direction, the time and/or position coordinates at which a turn is planned, the time and/or position coordinates at which a parking operation is planned, and/or the like.
A road sensor communication system (which may include a Wi-Fi detection system and/or a cellular communication system) may be used to read information from road sensors (e.g., indicating traffic flow speed and/or traffic congestion) and/or from traffic control devices (e.g., traffic lights).
When a user requests a pickup (e.g., through an application running on the user device 102), the user may specify a particular destination location. The initial position may be a current position of the vehicle 120, which may be determined using satellite radio navigation systems (e.g., GPS, galileo, COMPASS, DORIS, GLONASS and/or other satellite radio navigation systems) installed in the vehicle, wi-Fi positioning systems, cellular tower triangulation, and/or the like. Alternatively, the initial location may be specified by a user through a user interface provided by the vehicle 120 (e.g., the internal interface system 125) or through the user device 102 running the application. Alternatively, the initial position may be automatically determined based on position information obtained from the user device 102. In addition to the initial location and the destination location, one or more waypoints may be specified, enabling multiple destination locations.
Raw sensor data 161 from sensor array 121 may be processed by in-vehicle data processing system 123. The processed data 162 may then be transmitted by the data processing system 123 to the vehicle control system 126 and optionally to the server 130 via the communication system 124 and the communication array 122.
The data store 129 can store map data (e.g., map data 154) and/or subsets of map data 154 (e.g., a portion of map data 154 corresponding to a general area in which the vehicle 120 is currently located). In some embodiments, the vehicle 120 may record updated map data along the travel route using the sensor array 121 and transmit the updated map data to the server 130 via the communication system 124 and the communication array 122. The server 130 may then transmit the updated map data to one or more of the vehicles 170A-N and/or further process the updated map data.
The data processing system 123 may provide continuous or near continuous processed data 162 to the vehicle control system 126 in response to moment-to-moment activity in the environment surrounding the vehicle 120. The processed data 162 may include a comparison between the raw sensor data 161, which represents the operating environment of the vehicle 120 and is continuously collected by the sensor array 121, and the map data stored in the data store 129. In one example, the data processing system 123 is programmed with machine learning or other artificial intelligence capabilities to enable the vehicle 120 to identify and respond to conditions, events, and/or potential hazards. In a variation, the data processing system 123 may continuously or near continuously compare the raw sensor data 161 with the stored map data to perform positioning, so as to continuously or near continuously determine the position and/or orientation of the vehicle 120. The positioning of the vehicle 120 may enable the vehicle 120 to determine its instantaneous position and/or orientation relative to the stored map data in order to maneuver the vehicle 120 through traffic on neighborhood roads, and/or to assist a driver in maneuvering the vehicle 120 through traffic on neighborhood roads while identifying and responding to potential hazards (e.g., pedestrians) or local conditions, such as weather or traffic conditions.
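As a rough, assumption-laden sketch of how raw scan data might be compared against stored map data to score candidate positions and orientations (the actual positioning method of data processing system 123 is not specified here):

```python
import numpy as np
from scipy.spatial import cKDTree

def pose_score(scan_xy, map_tree, x, y, heading, max_dist=1.0):
    """Score a candidate 2D pose by how close transformed scan points fall to map points.

    scan_xy: (N, 2) scan points in the vehicle frame
    map_tree: cKDTree built over (M, 2) stored map points in the world frame
    """
    c, s = np.cos(heading), np.sin(heading)
    world = scan_xy @ np.array([[c, s], [-s, c]]) + np.array([x, y])
    dists, _ = map_tree.query(world, distance_upper_bound=max_dist)
    dists[np.isinf(dists)] = max_dist          # unmatched points get the maximum penalty
    return -dists.mean()                       # higher (less negative) is better

def localize(scan_xy, map_xy, candidates):
    """Pick the best (x, y, heading) pose from a list of candidate poses."""
    tree = cKDTree(map_xy)
    return max(candidates, key=lambda p: pose_score(scan_xy, tree, *p))
```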
Furthermore, positioning may enable the vehicle 120 to tune or beam steer the communication array 122 to maximize the quality of the communication link and/or minimize interference with other communications from other vehicles 170A-N. For example, communication system 124 may steer the beam of radiation patterns of communication array 122 in response to network configuration commands received from server 130. The data store 129 may store current network source map data identifying network base stations and/or other network sources providing network connectivity. The network source map data may indicate locations of base stations and/or available network types (e.g., 3G, 4G, LTE, wi-Fi, etc.) within the area where the vehicle 120 is located.
Although fig. 1B depicts certain operations as being performed by vehicle 120 or server 130, this is not meant to be limiting. The operations performed by the vehicle 120 and the server 130 as described herein may be performed by any entity. For example, certain operations typically performed by the server 130 (e.g., transmitting updated map data to the vehicles 170A-N) may be performed by the vehicle 120 in order to achieve load balancing purposes (e.g., reducing the processing load of the server 130, utilizing idle processing power on the vehicle 120, etc.).
Further, any of the vehicles 170A-N may include some or all of the components of the vehicle 120 described herein. For example, vehicles 170A-N may include a communication array 122 to communicate with vehicle 120 and/or server 130.
■Improved high resolution map generation features and related interfaces
Certain methods disclosed herein relate to generating an interactive user interface that enables a user to alter three-dimensional point cloud data and/or associated pose graph data generated from light detection and ranging scans prior to generating a high resolution map. The user may make selections within a two-dimensional map representation with overlaid graphical node indicators to alter graph connections, remove nodes, view the corresponding three-dimensional point cloud, and otherwise edit intermediate results from light detection and ranging scans to improve the quality of the high resolution map generated from the user-manipulated data. The resulting higher quality high resolution map may be transmitted to one or more vehicles, such as vehicle 120, to assist a driver in navigating, driving and/or maneuvering vehicle 120, and/or for use in navigating, driving and/or maneuvering vehicle 120 in an autonomous manner.
According to some embodiments of the present disclosure, three-dimensional point cloud scans are collected from light detection and ranging sensors located on the roof of a vehicle (e.g., an autonomous vehicle, a vehicle used for location-based services, a vehicle providing driver assistance functionality, etc.) while the vehicle travels on a road. These light detection and ranging scans from different regions can then be passed to an automated pipeline for data processing, including filtering, combining and matching of the various scans. A high resolution map may then be generated by projection of these point clouds. In addition to the three-dimensional point cloud and the two-dimensional map image, it would be beneficial to have a tool for visualizing the pose graph and the associated light detection and ranging scans, so that a supervising user assisting the mapping process can visually determine whether any inconsistencies or inaccuracies remain after the various steps of the automated mapping pipeline.
Aspects of the present disclosure include, for example, a user interface for viewing a high resolution map at different levels, exploring a three-dimensional point cloud of a portion of the high resolution map, measuring a distance between two points from the map or point cloud, and adjusting portions of the map to better align or match two or more point clouds. The user interfaces and associated functionality described herein may be used to improve the accuracy and efficiency of existing mapping methods.
As will be further described herein, aspects of the present disclosure address three related areas: map exploration, map editing, and map evaluation. When exploring a map in a user interface, a user may view a region of interest (ROI) in a two-dimensional map view and select a portion to view the corresponding three-dimensional point cloud in a separate pane or viewing area of the user interface. When evaluating and editing a map in two-dimensional and/or three-dimensional views, the user may interactively make immediate changes to reduce or minimize unintended inaccuracies created by the previously completed automatic mapping process.
The map exploration features described herein include loading one or more map graphs (which may take the form of pose graphs in some embodiments) and presenting a visual representation of the nodes and edges in the graphs within a portion of a user interface that presents a two-dimensional map data view. Such views within the user interface enable a user to visually inspect the constructed pose graph, navigate between portions of the graph to explore the associated three-dimensional point clouds, and determine whether any editing of the graph is required based on the visual inspection. The user interfaces described herein enable a user to pan and zoom within a two-dimensional map view or a three-dimensional point cloud view. Depending on the zoom level, a graph may be rendered using graphical indicators of different forms. For example, at a zoomed-out level, different sub-graphs may be abstracted as large rectangles or polygons covering areas of the map, while zooming in may cause the user interface to update to display the individual nodes and connections of the same sub-graphs, as described further herein.
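A minimal sketch of this zoom-dependent rendering follows, in which each sub-graph collapses to a covering rectangle when zoomed out and expands into individual nodes and edges when zoomed in; the threshold value and data layout are illustrative assumptions.

```python
import numpy as np

ZOOM_THRESHOLD = 14  # illustrative zoom level at which individual nodes appear

def render_primitives(subgraphs, zoom_level):
    """Return drawing primitives for a 2D map view of one or more pose graphs.

    subgraphs: list of dicts with 'positions' ((N, 2) node coordinates) and
               'edges' (list of (i, j) index pairs); a purely illustrative layout.
    """
    primitives = []
    for color_index, sub in enumerate(subgraphs):
        pos = np.asarray(sub["positions"])
        if zoom_level < ZOOM_THRESHOLD:
            # Zoomed out: abstract the whole sub-graph as one covering rectangle.
            (xmin, ymin), (xmax, ymax) = pos.min(axis=0), pos.max(axis=0)
            primitives.append(("rect", (xmin, ymin, xmax, ymax), color_index))
        else:
            # Zoomed in: emit each node and each connection individually.
            primitives += [("node", tuple(p), color_index) for p in pos]
            primitives += [("edge", tuple(pos[i]), tuple(pos[j]), color_index)
                           for i, j in sub["edges"]]
    return primitives
```

Collapsing sub-graphs to rectangles at low zoom also avoids drawing thousands of individual node markers that would not be visually distinguishable at that scale.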
The map exploration features described herein also include enabling a user to select one or more graph nodes to view their point clouds in a three-dimensional rendered view. The point clouds from different nodes may be rendered in different colors in the same view, enabling the user to visually determine how well adjacent point clouds align and to identify any inaccuracies. When viewing the point cloud, the user may choose to move, rotate, and/or zoom in three dimensions. The user interfaces described herein may also enable a user to compare two differently configured graphs in a single two-dimensional map view in order to identify any discrepancies or misalignments. Additionally, the user interface may include a background ruler grid and enable manual or automatic real-world distance measurement between two points selected in a two-dimensional map view or a three-dimensional point cloud view.
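The per-node coloring and point-to-point distance measurement described above might be sketched as follows, assuming point clouds already transformed into a common metric world frame; the palette and function names are illustrative only.

```python
import numpy as np

PALETTE = [(255, 0, 0), (0, 128, 255), (0, 200, 0), (255, 200, 0)]  # illustrative RGB colors

def colored_cloud(world_clouds):
    """Merge point clouds from several selected nodes, tagging each cloud with a color.

    world_clouds: list of (N_i, 3) arrays already transformed to the world frame.
    Returns (points, colors) suitable for a 3D renderer.
    """
    points = np.vstack(world_clouds)
    colors = np.vstack([
        np.tile(PALETTE[i % len(PALETTE)], (len(cloud), 1))
        for i, cloud in enumerate(world_clouds)
    ])
    return points, colors

def measure_distance(point_a, point_b):
    """Real-world Euclidean distance between two user-selected points,
    assuming the point clouds are stored in metric coordinates."""
    return float(np.linalg.norm(np.asarray(point_a) - np.asarray(point_b)))
```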
The map editing features described herein include enabling a user to delete edges from a graph, add edges to a graph, and delete nodes from a graph. These modifications can then influence which point cloud data is used to construct the final high resolution map, and how the point cloud data associated with different light detection and ranging scans is combined in the high resolution map. Additionally, the user interface features herein may enable a user to adjust the alignment or matching of two point clouds. For example, if a user identifies an area of poor map quality caused by one or more point clouds being misplaced or inaccurately positioned relative to another point cloud, the user may move the point cloud data to adjust its positioning relative to adjacent or redundant points from another light detection and ranging scan or capture.
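Building on the illustrative PoseGraph sketch given earlier, the editing operations named above might look like the following; treating a user move/rotate command as a rigid transform applied to a node's pose is an assumption of this sketch, not a statement of the patented method.

```python
import numpy as np

def delete_edge(graph, source, target):
    """Drop the constraint between two scans so it no longer influences the map."""
    graph.edges = [e for e in graph.edges
                   if {e.source, e.target} != {source, target}]

def delete_node(graph, node_id):
    """Remove a scan and every edge that references it."""
    graph.nodes.pop(node_id, None)
    graph.edges = [e for e in graph.edges
                   if node_id not in (e.source, e.target)]

def nudge_node(graph, node_id, translation, yaw_radians):
    """Apply a user-requested move/rotate to one scan's pose as a rigid transform.

    translation: length-3 offset in world coordinates; the yaw rotation is taken
    about the world origin for simplicity in this sketch.
    """
    c, s = np.cos(yaw_radians), np.sin(yaw_radians)
    delta = np.eye(4)
    delta[:2, :2] = [[c, -s], [s, c]]
    delta[:3, 3] = translation
    graph.nodes[node_id].pose = delta @ graph.nodes[node_id].pose
```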
FIG. 2 illustrates a block diagram showing the server of FIGS. 1A and 1B in communication with a map editor device 202, in accordance with one embodiment of a map editing environment 200. A supervising user may use the map editor device 202 to view, edit, and refine the intermediate data at various points in the high resolution map generation process. For example, as described below, a user of the map editor device 202 may access a user interface that enables the user to view and edit point cloud data and associated pose graph data, which may be stored in map data store 154, before the server 130 generates final high resolution map data for use by one or more vehicles 120. The map editor device 202 may be in communication with the server 130 via a network 204, which may be any of the network types described above in connection with the network 110. Network 204 may be the same network as network 110 or a different network. For example, in one embodiment, network 204 may be a local area network controlled by an operator of server 130.
As shown in fig. 2, the server 130 may include a map editing unit 210, a user interface unit 212, a map rendering unit 214, and map editor data 214, in addition to the components shown in fig. 1A. In the illustrated embodiment, the map editing unit 210 may generally be responsible for effecting changes to the original and intermediate high resolution map-related data, both programmatically and in response to user-initiated requests from the map editor device 202. The user interface unit 212 may be responsible for generating, for display (e.g., by the map editor device 202), a variety of user interfaces described herein, such as user interfaces that enable a user of the map editor device 202 to visualize and manipulate point cloud data, pose graph data, and intermediate and final high resolution map data. The map rendering unit 214 may generate a high resolution map from intermediate results, such as point cloud data and pose graph data.
The stored map editor data 214 may include, for example, a log of changes made by a user of the map editor device 202 to the point cloud data and/or pose graphical data so that the changes may be rolled back or undone. The map editor data 214 may also include information that is not required to generate the high resolution map itself, but that facilitates visualization and editing by the user, for example. For example, such data may include colors assigned to various graphics for display in a user interface, user preferences regarding keyboard shortcuts for graphics or point cloud manipulation, three-dimensional rendering or two-dimensional projection preferences (e.g., default zoom level, resolution, color scheme, zoom or rotation sensitivity, etc.), portions or regions of a map marked by a user for further review, and/or other data. In some embodiments, the map editor device 202 may be a computing system, such as a desktop or notebook computer or a mobile computing device (e.g., a smart phone or tablet device). The map editor device 202 may include or be in communication with a display device, such as a display monitor, touch screen display, or other well known display device. The map editor device 202 may also include or be in communication with a user input device including, but not limited to, a mouse, a keyboard, a scrolling device, a touch screen display, a motion capture device, and/or a stylus.
In one embodiment, the map editor device 202 may operate or execute an application (e.g., a browser or custom developed application) that receives a user interface generated by the server 130 (e.g., by the user interface unit 212), displays the user interface, and sends a response, instruction, or request back to the server 130 based on a selection made by a user of the map editor device within the user interface. The server 130 may then alter the data based on the user interaction and may send back an updated user interface for display by the map editor device. In another embodiment, the map editor device 202 may include a map editing unit, a user interface unit, and/or a map rendering unit (e.g., such units may be implemented in executable instructions of an application operated by the map editor device 202) such that the map editor device 202 need not communicate with the server 130 or any other system to generate a user interface to view and edit map data. For example, the map editor device 202 may load light detection and ranging data and/or intermediate data (e.g., pre-processed point cloud data and pose graphics) from the server 130 and may not then communicate again with the server 130 until the edited data or final high resolution map data is sent back to the server 130 for storage in the data store 150 and distribution to one or more vehicles 120. In other embodiments, a variety of functions may be implemented by the server 130 or the map editor device 202, depending on, for example, the hardware capabilities and network bandwidth considerations of each system in a given instance.
FIG. 3 is an illustrative user interface 300 including a three-dimensional point cloud rendering 320 and a zoomed-out view of a two-dimensional map projection 310, where the two-dimensional map projection 310 includes graphical indicators 312, 314, and 316 representing areas of different light detection and ranging scans. As described above, each user interface (and associated three-dimensional rendering and/or two-dimensional projection that may be included therein) that will be described in connection with fig. 3-10 may be generated by the server 130 or the map editor device 202, depending on the embodiment, and may be presented for display by the map editor device.
Each region marked by graphical indicators 312, 314, and 316 may represent, for example, hundreds or thousands of individual light detection and ranging scans, the specific number depending on the zoom level of the current view. In one embodiment, a vehicle having one or more light detection and ranging sensors may be configured to capture scans periodically (e.g., every millisecond, every 10 milliseconds, every 100 milliseconds, every second, etc.) while driving through a street represented in the two-dimensional map projection 310. The point cloud data captured by successive scans may thus partially overlap each other, and may be matched and preprocessed by well-known automated methods to create the intermediate point cloud results and pose graph used to generate the two-dimensional map projection 310 and the three-dimensional point cloud rendering 320. Such automated processes may include, for example, iterative closest point (ICP) algorithms employed to minimize differences between neighboring point clouds and to assign connections between point clouds represented by nodes in a pose graph based on a match score. However, in some cases, these automated processing methods may not be able to create optimal point cloud alignment and/or pose graphical data. The user interfaces described herein, including user interface 300, may enable a user to visually identify potential inconsistencies, errors, misalignments, low-quality or redundant data, and/or other problems that remain after automatic processing of the light detection and ranging data.
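For readers unfamiliar with ICP, the following is a minimal point-to-point ICP sketch in Python (using NumPy and SciPy). It illustrates the general technique of minimizing differences between neighboring point clouds and producing a crude match score; it is not the specific matching process used by the server 130.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source: np.ndarray, target: np.ndarray, iterations: int = 20):
    """Minimal point-to-point ICP: returns a 4x4 transform aligning source (N x 3)
    to target (M x 3) and a crude match score (mean residual distance)."""
    src = source.copy()
    transform = np.eye(4)
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)                 # nearest-neighbor correspondences
        matched = target[idx]
        # Rigid alignment of src onto matched points (Kabsch / SVD)
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        transform = step @ transform             # accumulate total transform
    score = float(np.mean(np.linalg.norm(src - target[tree.query(src)[1]], axis=1)))
    return transform, score
```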
Although the graphical indicators 312, 314, and 316 are each represented as rectangles with different dashed or solid lines to distinguish their appearance from one another, such formatting is for illustration purposes only. The different line appearances may represent different colors, so that the actual user interface presented may have, for example, a blue solid line for indicator 312, a red solid line for indicator 314, and a yellow solid line for indicator 316. In some embodiments, the color selected for a given indicator may indicate a relative quality determined for the associated scans, such that red indicates an area that may require attention or potential editing by the user. In other embodiments, the color or pattern may not have any significance other than to visually distinguish between different light detection and ranging scan data sets. The different groups may be, for example, scans captured at different times by the same vehicle, or scans captured by different vehicles. While the graphical indicators are presented as rectangles in the user interface 300, this is not intended to be limiting. In other embodiments, the graphical indicator may be another polygonal, circular, or elliptical shape, or may have irregular rather than straight or smooth edges (e.g., the indicator may closely track the scan area such that its shape generally follows the shape of the street that the light detection and ranging capture vehicle traveled).
The two-dimensional map projection 310 may be generated by the server or map editor device as a two-dimensional overhead projection of the light detection and ranging point cloud data captured by a vehicle on the ground. In other embodiments, the two-dimensional map data may be based at least in part on images captured from a camera on the ground (e.g., on a vehicle), in the air, or by satellites. The user may select a point or region in the two-dimensional map projection 310 to view corresponding three-dimensional point cloud data in the left portion of the user interface that includes the three-dimensional point cloud rendering 320. The user may individually rotate, pan, and zoom the two-dimensional or three-dimensional view while the other view remains static. In other embodiments, one view may automatically adjust to match the panning, scrolling, selection, rotation, or zooming performed by the user in the other view (e.g., scrolling in the two-dimensional representation 310 may automatically update the point cloud data presented in the three-dimensional point cloud view 320). The user may zoom in or out of the two-dimensional or three-dimensional view using keyboard shortcuts, a scroll wheel, touch screen gestures, or other means. In other embodiments not illustrated, buttons or other selectable options may be presented in the user interface 300 to enable scrolling, panning, rotating, selecting, and/or zooming in either view.
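As an illustration of how such a two-dimensional overhead projection might be derived from point cloud data, the sketch below bins the (x, y) coordinates of the points into a top-down count image that could be rendered as the two-dimensional map view; the cell size is an assumed parameter, not a value from this disclosure.

```python
import numpy as np

def overhead_projection(points: np.ndarray, cell_size: float = 0.2) -> np.ndarray:
    """Project 3D points (N x 3) onto a top-down 2D grid by dropping z and
    counting points per cell; the resulting count image can be displayed as
    the two-dimensional map projection."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    cols, rows = np.ceil((xy.max(axis=0) - mins) / cell_size).astype(int) + 1
    image = np.zeros((rows, cols), dtype=np.uint32)
    ix, iy = ((xy - mins) / cell_size).astype(int).T
    np.add.at(image, (iy, ix), 1)   # accumulate point counts per grid cell
    return image
```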
FIG. 4 is an illustrative user interface 400 including a magnified view of a three-dimensional point cloud rendering 420 and a two-dimensional map projection 410, including overlaid graphical indicators of nodes and connections within a pose graph associated with the point cloud data. The presented two-dimensional map view 410 may be displayed as a result of a user requesting to zoom in on the previously presented two-dimensional map view 310 discussed above with reference to fig. 3. For example, the user interface may be configured to switch between different ways of abstracting or grouping the point cloud scans when a threshold zoom level is reached. Upon zooming in to a scale that meets a predetermined threshold, for instance, the two-dimensional map representation may change its graphical overlay data to render nodes and corresponding connections (representing graph nodes and edges, respectively, in the pose graph) rather than higher-level abstractions or groupings, such as rectangles or polygons that define regions.
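A minimal sketch of the zoom-threshold switching described above follows; the threshold value and the overlay names are illustrative assumptions.

```python
ZOOM_THRESHOLD = 15.0  # illustrative zoom level at which node-level detail appears

def select_overlay(zoom_level: float) -> str:
    """Choose which overlay the two-dimensional map view renders: region-level
    rectangles when zoomed out, pose-graph nodes and connections when zoomed in."""
    return "nodes_and_edges" if zoom_level >= ZOOM_THRESHOLD else "region_rectangles"
```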
Each displayed node (e.g., nodes 412-415) may represent multiple scans that have been grouped together as a single graph node in the pose graph during processing (e.g., using ICP). For example, in one embodiment, each graph node may represent twenty adjacent or partially overlapping light detection and ranging scans captured in close proximity to each other in sequence (e.g., one per second). As shown, the nodes represented in the two-dimensional map representation 410 may have different appearances to represent that they are associated with different groups (e.g., captured at different times and/or by different sensors), different pose graphs, or different related subgraphs. Connections may be presented between different nodes in the same or different groupings to indicate that their point clouds partially overlap and that there is sufficient matching confidence (e.g., as determined by ICP, another automated process, and/or user input) to treat them as neighboring groupings when generating a high resolution map. While cross-hatching is used to illustrate the different appearances and groupings of node indicators in the drawings, it should be appreciated that these patterns may represent different colors in an actual user interface.
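The pose graph data described above might be represented roughly as follows; this Python sketch, with illustrative field names, captures nodes that group multiple scans and edges that carry a match confidence, and is not an actual data structure from this disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class PoseGraphNode:
    node_id: int
    scan_ids: List[int]      # e.g. ~20 consecutive LiDAR scans grouped into one node
    pose: Tuple[float, ...]  # estimated x, y, z, roll, pitch, yaw for the group
    group: str               # capture session / sensor, used to pick a display color

@dataclass
class PoseGraph:
    nodes: Dict[int, PoseGraphNode] = field(default_factory=dict)
    # Edge key is an unordered node-id pair; the value is the match confidence
    # (e.g. produced by ICP) that justified connecting the two nodes.
    edges: Dict[Tuple[int, int], float] = field(default_factory=dict)

    def neighbors(self, node_id: int) -> List[int]:
        return [a if b == node_id else b for a, b in self.edges if node_id in (a, b)]
```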
In the illustrated user interface 400, the user has selected the graphical indicators of nodes 414 and 412, which are colored in different colors (shown with different cross-hatching or patterns in the drawing) to illustrate that they are part of different groupings and/or subgraphs. In response to each node being selected, or once the nodes are selected and the user selects the "display selected" option 434, the three-dimensional point cloud view 420 may be updated to display rendered point cloud data corresponding to the selected nodes. In a "color mode" (which may be one three-dimensional viewing option within the user interface), the three-dimensional rendering in the three-dimensional point cloud view 420 may be colored to match or correspond to the coloring of the corresponding nodes in the two-dimensional view, or to otherwise visually indicate which sets of point cloud data are from different sources or groupings. Upon viewing how well the point cloud data of node 414 and node 412 match in the three-dimensional point cloud view 420 (e.g., with both groups of point clouds rendered simultaneously), the user may determine that the match is sufficient to add a new connection between nodes 414 and 412 by selecting the "connect node" option 430. The user may choose to save this updated graph data in order to add a new edge between the nodes represented by graphical indicators 414 and 412 in the stored pose graphical data, which will then be used by the server 130 and/or map editor device 202 to generate and/or update the high resolution map.
FIG. 5 is an illustrative user interface 500 including a three-dimensional point cloud rendering and a two-dimensional map projection 510, in which two user-selected nodes have been removed. For example, as mentioned above in connection with fig. 4, a user may view the three-dimensional point cloud data associated with node indicators 414 and 412. Rather than determining that the nodes match each other, the user may instead determine that their point cloud data should not be used to generate the high resolution map, and may select the "remove node" option 512 to delete both nodes. For example, the user may delete the nodes if the user determines that the corresponding point cloud data is of poor quality and/or is redundant with respect to other point cloud data captured at or near the same location.
If this change was made in error, the user may select the "undo" option 514; otherwise, the user may select the "save graphics" option 516 to delete both nodes and their associated edges from the stored graph, or to mark the nodes (and their associated point cloud data) in the stored graph data to be ignored when constructing the high resolution map. In some embodiments, the user may determine to delete a node based on a combination of the visual information provided by the two-dimensional representation and the three-dimensional rendering. For example, the two-dimensional projection 510 may indicate that two nodes from different graphs or groupings substantially overlap or are in the same location, while the three-dimensional rendering of the point cloud data may provide the user with information about which redundant node is associated with better quality point cloud data.
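Building on the PoseGraph sketch above, the "connect node", "remove node", and "undo" interactions of figs. 4 and 5 might be handled along these lines; the handler names and the shape of the log entries are assumptions for illustration.

```python
from typing import List

def connect_nodes(graph: PoseGraph, a: int, b: int, confidence: float, log: List[tuple]) -> None:
    """Handler for the 'connect node' option: add an edge and log it for undo."""
    key = (min(a, b), max(a, b))
    log.append(("add_edge", key, None))
    graph.edges[key] = confidence

def remove_nodes(graph: PoseGraph, node_ids: List[int], log: List[tuple]) -> None:
    """Handler for the 'remove node' option: drop the nodes and any edges that
    touch them, logging enough state to undo the change."""
    for node_id in node_ids:
        removed_edges = {k: v for k, v in graph.edges.items() if node_id in k}
        log.append(("remove_node", graph.nodes.pop(node_id), removed_edges))
        for k in removed_edges:
            del graph.edges[k]

def undo(graph: PoseGraph, log: List[tuple]) -> None:
    """Handler for the 'undo' option: reverse the most recent logged change."""
    if not log:
        return
    action, payload, extra = log.pop()
    if action == "add_edge":
        graph.edges.pop(payload, None)
    elif action == "remove_node":
        graph.nodes[payload.node_id] = payload
        graph.edges.update(extra)
```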
FIG. 6 is an illustrative user interface 600 including a three-dimensional point cloud rendering and a zoomed-out view of a two-dimensional map projection 610, where changes have been made to the displayed pose graph data based on user interactions with the user interface. In this example, the user has chosen to add a connection between the nodes represented by the previously unconnected graphical indicators 612 and 614 in the user interface 400 of fig. 4. The user has also removed nodes in the same region in order to remove redundant data and optimize the point cloud data. Based on these changes, the pose graph data stored in the data store 150 may be changed by the server 130 and/or map editor device 202 to reflect the node deletions and the added edge as selected by the user through the user interface 600. In this manner, the user can improve the quality and accuracy of the high resolution map that will subsequently be generated from a combination of the modified pose graphical data and the associated point cloud data.
FIG. 7 is a flow chart of an illustrative method 700 for providing user interface functionality that enables a user to view and edit point cloud and pose graphical data for use in generating a high resolution map. As described above, the map editor device 202 or the server 130 may perform the various steps described herein, depending on the embodiment. Thus, references to the system in the flowchart descriptions of fig. 7 and 10 may refer to either the server 130 or the map editor device 202, depending on the embodiment. Many details of the individual blocks of fig. 7 have been described above and thus, to avoid repetition, will only be summarized below.
At block 702, the system may obtain light detection and ranging scan data and/or other sensor or camera data that may be used to generate a high resolution map. For example, as described above, the obtained sensor data may include radio detection and ranging data, infrared camera images, inertial measurement unit data, and the like. At block 704, the system may then assign individual light detection and ranging scans and/or other captured data to nodes in the pose graph. The system may then perform point cloud matching, filtering, and/or other automatic optimization of the point clouds and/or pose graph at block 706. These preprocessing or intermediate steps in creating a high resolution map are well known in the art and need not be described in detail herein. For example, the point cloud matching and pose graph construction may be based in part on an iterative closest point algorithm.
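Block 704 could be sketched as follows, reusing the PoseGraphNode and PoseGraph structures from the earlier sketch; grouping twenty consecutive scans per node mirrors the example given above, and the function name and session label are illustrative.

```python
from typing import List

def assign_scans_to_nodes(scan_ids: List[int], scans_per_node: int = 20) -> PoseGraph:
    """Group consecutive scans into pose-graph nodes (block 704). The pose is
    left at the origin here; in practice it would come from odometry or scan
    matching during preprocessing."""
    graph = PoseGraph()
    for i in range(0, len(scan_ids), scans_per_node):
        node_id = i // scans_per_node
        graph.nodes[node_id] = PoseGraphNode(
            node_id=node_id,
            scan_ids=scan_ids[i:i + scans_per_node],
            pose=(0.0,) * 6,
            group="session_0",   # illustrative capture-session label
        )
    return graph
```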
At block 708, the system may generate a user interface that includes an interactive graphical representation of the pose graph data (including nodes and edges) as a two-dimensional rendering in a first portion of the user interface. Such a user interface has been described above in connection with fig. 4, for example. Next, at block 710, the system may display an interactive three-dimensional rendering in a second portion of the user interface, as described above in connection with the various user interfaces. At block 712, the system may receive, via the user interface, user edits of at least one point cloud in the three-dimensional rendering or of at least one graph node or edge in the two-dimensional rendering, as described above in connection with the example user interfaces.
Finally, at block 714, the system may generate a high resolution map based on the two-dimensional pose graph data and the corresponding three-dimensional point cloud data, incorporating the user edits received via the user interface. Methods of generating a high resolution map from intermediate pose graph and point cloud data are well known in the art and need not be described herein. However, the additional editing and optimization of the intermediate results through the user interfaces described herein produces an improved high resolution map relative to performing the final map generation step, using prior art methods, on intermediate results that have not benefited from such editing.
FIG. 8 is an illustrative user interface 800 including an enlarged view of a three-dimensional point cloud rendering 820 and a two-dimensional map projection 810, including a display of distance measurements between user-selected points. The user can select (e.g., by clicking with a mouse cursor or touching a touch screen) any two points within the two-dimensional view 810 or the three-dimensional view 820 to view the distance measurement between the points. For example, the user may select points 821 and 822 in the three-dimensional view 820, and then select the "three-dimensional measurement" option 825 to present the distance between the two points as measurement 823. The distance may be measured by the computing system (map editor device 202 or server 130) using the (x, y, z) coordinates of each point in the three-dimensional virtual space. The distance may reflect the actual real-world distance between the captured light detection and ranging data points and may employ user-customizable units of measure and/or scale. Similarly, the user may make measurements in the two-dimensional view 810, such as selecting points 811 and 812, which are presented as measurement 813 after the user selects the "two-dimensional measurement" option 815. In some embodiments, corresponding measurements and points may be automatically added to the view (two-dimensional or three-dimensional) other than the view in which the points were selected by the user, while in other embodiments the user may set different measurement points independently in each view. For example, the user may select points 811 and 812 in the two-dimensional view 810 and be presented with measurement 813. In response to the user selecting points 811 and 812, the three-dimensional view 820 may be automatically updated to display the selection of points 821 and 822 and the measurement 823. The automatically selected points in the other view may correspond to the same or nearly the same geographic locations as the user-selected points (e.g., point 821 may be the same geographic location as point 811 and/or point 822 may be the same geographic location as point 812).
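The distance measurements described above reduce to Euclidean distances between the selected coordinates; a small sketch follows, where the unit-scale parameter for the two-dimensional projection is an assumption rather than a value from this disclosure.

```python
import math
from typing import Sequence

def measure_3d(p1: Sequence[float], p2: Sequence[float]) -> float:
    """Distance between two user-selected points in the 3D virtual space,
    computed from their (x, y, z) coordinates."""
    return math.dist(p1, p2)

def measure_2d(p1: Sequence[float], p2: Sequence[float], meters_per_unit: float = 1.0) -> float:
    """Distance between two points selected in the 2D projection; the factor
    converting projection units to real-world meters is an assumed parameter."""
    return math.dist(p1, p2) * meters_per_unit
```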
FIG. 9 is an illustrative user interface 900 including a three-dimensional point cloud rendering 902 of two point clouds 910 and 912, which enables a user to visually realign or match points in the respective point clouds. The user interface 900 may be considered an "adjustment mode" interface that the user may enter by selecting an "enter adjustment mode" selectable option in a previously presented user interface. In other embodiments, the functionality provided in the adjustment mode may be directly accessible in any of the three-dimensional point cloud rendering views of the previously described user interfaces, may be accessible while the two-dimensional map representation is still present in the view, and the two-dimensional map representation may remain interactive in the same user interface as the adjustment view.
In some embodiments, the point cloud data 910 and the point cloud data 912 may each represent one or more different light detection and ranging scans, with the real-world regions captured by the scans at least partially overlapping each other. For example, the point clouds 910 and 912 may each be associated with a different adjacent graphical indicator selected by the user in the two-dimensional map view for further analysis or editing. Visual information in the two-dimensional view, such as the color of a graphical indicator or shading present in the area near a node, may indicate to the user that these point clouds may need to be re-coordinated, realigned, or manually matched. To better facilitate visually assisted point cloud matching, the point cloud data 910 may be presented in one color, while the point cloud data 912 may be presented in a different color (e.g., a contrasting color). The user may select either of the displayed sets of point clouds and then may use the adjustment controls 904 to move the selected points or adjust the yaw, pitch, or roll angles. As indicated, the adjustment options may have varying scales (e.g., separate options may be used to move the points of the point cloud by an increment of 0.1 or an increment of 1.0 along the x-axis). In some embodiments, these relative increments may be adjusted or set by the user.
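One way to apply an adjustment-mode step (a translation plus yaw/pitch/roll rotation of the selected point cloud) is sketched below in Python; the increment values and the choice to rotate about the cloud's centroid are assumptions for illustration, not requirements of this disclosure.

```python
import numpy as np

def adjust_point_cloud(points: np.ndarray,
                       dx: float = 0.0, dy: float = 0.0, dz: float = 0.0,
                       yaw: float = 0.0, pitch: float = 0.0, roll: float = 0.0) -> np.ndarray:
    """Apply one adjustment step to a selected point cloud (N x 3): a rotation
    about the cloud's centroid followed by a translation. Angles are in radians;
    translation increments such as 0.1 or 1.0 mirror the varying control scales."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    R = Rz @ Ry @ Rx
    centroid = points.mean(axis=0)
    rotated = (points - centroid) @ R.T + centroid
    return rotated + np.array([dx, dy, dz])
```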
While the adjustment options 904 are presented as keyboard shortcuts (e.g., the user pressing the "1" key moves the selected point cloud left along the x-axis by 0.1, or pressing the "|" key moves the selected point cloud right along the x-axis by 0.1), other input methods may be used in other embodiments. For example, in other embodiments, the user may speak a command (e.g., "1 left", "scroll 0.5") or select a button or other selectable option in the user interface. In some embodiments, the system (e.g., server 130 and/or map editor device 202) may automatically generate hints, tips, or suggestions as to how the point clouds should be changed to better match each other, and may present these suggestions through text or speech, or may automatically apply the change visually and request user confirmation. For example, the system may identify two or more point clouds whose edges are misaligned by less than a threshold distance (e.g., 0.1, 0.5, 1, etc., along the x-axis, y-axis, and/or z-axis) and/or a threshold angle (e.g., 1°, 5°, 10°, etc., about the x-axis, y-axis, and/or z-axis). The system may calculate the amount by which one or more point clouds should change so that the edges are no longer misaligned. As an illustrative example, the system may identify that the edges of the point clouds 910 and 912 are offset by less than the threshold distance and/or the threshold angle. In response, the system may automatically generate hints, tips, or suggestions as to how to alter the point clouds 910 and 912 to better match each other. Once the user has completed matching or repositioning the point clouds, the user may select the "Add Current Change" option 920 or the "save all Change" option 922. Selecting a save option may then cause the new locations and orientations of the point clouds relative to each other to be stored in the data store 150 for subsequent use by the server 130 and/or the map editor device 202 in generating and/or updating the high resolution map. The user may exit the adjustment mode and return to the previously presented user interface by selecting option 924.
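The threshold-based suggestion logic described above could be approximated as follows, reusing the icp() sketch from earlier; the specific thresholds and the use of ICP to estimate the residual offset are illustrative assumptions, not the method prescribed by this disclosure.

```python
import numpy as np

def suggest_adjustment(source: np.ndarray, target: np.ndarray,
                       max_offset: float = 0.5, max_angle_deg: float = 5.0):
    """Return a suggested rigid correction for `source` if it is only slightly
    misaligned with `target`, else None (leaving the adjustment to the user)."""
    transform, _ = icp(source, target)
    translation = transform[:3, 3]
    # Rotation angle recovered from the trace of the rotation block.
    cos_angle = np.clip((np.trace(transform[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))
    if np.linalg.norm(translation) < max_offset and angle_deg < max_angle_deg:
        return transform   # small enough to present as a one-click suggestion
    return None            # misalignment too large; do not auto-suggest
```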
FIG. 10 is a flow chart of an illustrative method 1000 for enabling a user to visually edit the positioning of one or more point clouds for use in generating a high resolution map, which may be viewed as a process of manually or semi-manually matching two sets of point cloud data through an interactive user interface. Since many of the details of the individual blocks of fig. 10 have been described above, they are summarized below in order to avoid repetition. At block 1002, the system may load point cloud data for two or more point clouds based on respective graphical selections made by a user through a two-dimensional map view portion of a user interface (similar to the user interfaces described above). In some embodiments, the point cloud data may be retrieved from the data store 150 and loaded into Random Access Memory (RAM) of the server or map editor device, depending on the embodiment, for use by the system in rendering the point cloud data in a three-dimensional virtual space.
At block 1004, the system may render two or more point clouds for display in a user interface, where each point cloud may be generated by a different light detection and ranging scan (or a different set of light detection and ranging scans). For example, the point clouds may be captured at different times, captured by different sensors, or obtained by applying different filtering or preprocessing to the corresponding light detection and ranging data. At block 1006, the system may receive a user selection of one of the two or more point clouds to be manipulated by the user (e.g., moved or rotated to better match the other displayed point cloud data).
Next, at block 1008, the system may receive one or more commands from the user to move and/or rotate the selected point cloud in the three-dimensional virtual space, as described above in connection with fig. 9. The system may then adjust the display position of the selected point cloud relative to other concurrently displayed point clouds in real-time in response to the user command at block 1010. At block 1012, the system may store the adjusted point cloud location data for use in generating a new high resolution map, such as replacing previously stored data in the data store 150, in response to a user selection (as described above).
With respect to the figures described herein, other embodiments are possible within the scope of the invention; for example, the components, steps, blocks, operations, and/or messages/requests/queries/instructions described above may be arranged, ordered, subdivided, organized, and/or combined differently. In some embodiments, different components may initiate or perform a given operation. For example, it should be appreciated that, in other embodiments, operations described as involving collaboration or communication between the server 130 and the map editor device 202 may be implemented entirely by a single computing device (e.g., the server 130 communicating only with the display and user input device, or the map editor device 202 executing only locally stored executable instructions of an application running on the map editor device).
■Example embodiment
Some example enumerated embodiments of the present invention are recited in this section as methods, systems, and non-transitory computer-readable media, and are not limiting.
In one embodiment, the computer-implemented method described above comprises: point cloud data generated from a plurality of light detection and ranging (LiDAR) scans captured along a plurality of roads is obtained and then grouped to form a plurality of point cloud groupings including at least (a) first grouped point cloud data captured by light detection and ranging in a first geographic area during a first time period and (b) second grouped point cloud data captured by light detection and ranging in the first geographic area during a second time period, wherein at least a first portion of the first grouped point cloud data intersects at least a second portion of the second grouped point cloud data in three-dimensional space. The method may further comprise: a user interface for display is generated, wherein the user interface comprises a two-dimensional map representation of at least a portion of the first geographic area, wherein the two-dimensional map representation is generated as a projection of at least a subset of the point cloud data. The method may then include: a first graphical indicator and a second graphical indicator are superimposed within a two-dimensional map representation within the user interface, wherein the first graphical indicator indicates a first location of first grouped point cloud data within the two-dimensional map representation, and wherein the second graphical indicator indicates a second location of second grouped point cloud data within the two-dimensional map representation, and a magnification request is received through the user interface. In response to the zoom-in request, the method may include updating a display of the two-dimensional map representation to include additional graphical overlay data, wherein the graphical overlay data includes a plurality of node indicators and corresponding connections between the respective node indicators, wherein the plurality of node indicators includes: (a) A first set of node indicators representing nodes in a first pose graph associated with a first set of point cloud data and (b) a second set of node indicators representing nodes in a second pose graph associated with a second set of point cloud data, and then receiving a user selection of at least one node indicator of the first set of node indicators through the user interface, wherein the at least one node indicator represents at least a first node in the first pose graph. The method may further comprise: generating, within a different portion of the user interface than the two-dimensional map representation, a three-dimensional point cloud rendering of point cloud data represented by the at least one node indicator for display; and presenting selectable options for manipulating at least the first pose graphic within the user interface, wherein the selectable options include (1) a first option to remove the first node from the first pose graphic and (2) a second option to edit one or more connections of the at least one node indicator, wherein the editing includes at least one of deleting a connection or adding a connection between the at least one node indicator and a different node indicator of the first or second set of node indicators. The method may include generating altered pose graphics data for at least one of the first pose graphic or the second pose graphic based on a user selection of at least one of the first option or the second option, and generating a high resolution map based on the altered pose graphics data and the point cloud data.
The computer-implemented method described above may further comprise: the high-resolution map is stored in an electronic data store and transmitted over a network to a plurality of vehicles for use in navigating one or more of the plurality of vehicles. The first graphical indicator and the first set of node indicators may be displayed in a first color, wherein the second graphical indicator and the second set of node indicators are displayed in a second color, and wherein the first color is different from the second color.
According to another embodiment, a computer system may include a memory and a hardware processor in communication with the memory and configured with processor-executable instructions to perform particular operations. The operations may include obtaining point cloud data generated by a plurality of light detection and ranging (LiDAR) scans of a geographic area, and then generating a user interface for display, wherein the user interface includes a two-dimensional map representation of at least a portion of the geographic area. The operations may also include overlaying graphical overlay data within the two-dimensional map representation within the user interface, wherein the graphical overlay data includes a plurality of node indicators and corresponding connections between respective node indicators, wherein the plurality of node indicators includes: (a) A first set of node indicators representing nodes in a first pose graph associated with a first set of point cloud data and (b) a second set of node indicators representing nodes in a second pose graph associated with a second set of point cloud data, and then receiving a user selection of at least one node indicator of the first set of node indicators through the user interface, wherein the at least one node indicator represents at least a first node in the first pose graph. The operations may also include, in response to the user selection, generating for display, within a different portion of the user interface than the two-dimensional map representation, a three-dimensional point cloud rendering of the point cloud data represented by the at least one node indicator. The operations may also include presenting, within the user interface, selectable options for manipulating at least the first pose graphic, wherein the selectable options include (1) a first option to remove the first node from the first pose graphic and (2) a second option to edit one or more connections of the at least one node indicator, wherein the editing includes at least one of deleting a connection or adding a connection between the at least one node indicator and a different node indicator of the first or second set of node indicators. The operations may further include generating altered pose graphics data for at least one of the first pose graphics or the second pose graphics based on user selection of at least one of the first option or the second option, and generating a high resolution map based on the altered pose graphics data and the point cloud data.
The operations of the above computer system may further include generating, within a different portion of the user interface than the two-dimensional map representation and while the three-dimensional point cloud rendering of the point cloud data represented by the at least one node indicator is displayed, a second three-dimensional point cloud rendering of the point cloud data represented by a second node indicator selected by the user within the two-dimensional map representation for display, wherein the second node indicator is located in a second set of node indicators.
In one embodiment, the three-dimensional point cloud rendering of the point cloud data represented by the at least one node indicator is displayed in a different color than the second three-dimensional point cloud rendering. In another embodiment, each individual node indicator in the first set of node indicators represents a plurality of light detection and ranging scans captured in proximity to each other. In another embodiment, the user selection is of the first option, and generating the altered pose graphical data comprises removing one or more point clouds associated with the at least one node indicator from consideration by the computer system when generating the high resolution map. In another embodiment, the user selection is of the second option, and generating the altered pose graphical data comprises adding a connection between the at least one node indicator from the first pose graph and a node indicator from the second pose graph. In another embodiment, the two-dimensional map representation is generated as a projection of at least a subset of the point cloud data.
In another embodiment of the above computer system, the user interface further provides a measurement function that enables a user to select any two points in the two-dimensional map representation, and the selection of two points using the measurement function causes the computer system to display a connection between the two points and an automatically calculated distance measurement between the two points. In another embodiment, the operations further include automatically updating the three-dimensional point cloud rendering in the user interface to mark a second connection within the three-dimensional point cloud rendering at a location in the three-dimensional virtual space corresponding to a location of the connection displayed in the two-dimensional map representation. In another embodiment, the initial connections between the respective node indicators are based at least in part on confidence scores generated by the computer system during a point cloud matching process executed prior to generating the user interface for display. In another embodiment, the point cloud matching process includes applying an iterative closest point algorithm.
According to another embodiment, a non-transitory computer-readable medium stores computer-executable instructions that, when executed by one or more computer systems, configure the one or more computer systems to perform particular operations. The operations may include generating a user interface for display, wherein the user interface includes a two-dimensional map representation of at least a portion of a geographic area, and presenting, in the two-dimensional map representation within the user interface, graphical data including a plurality of node indicators and corresponding connections between the respective node indicators, wherein the plurality of node indicators includes (a) a first set of node indicators representing nodes in a first pose graph associated with first grouped point cloud data and (b) a second set of node indicators representing nodes in a second pose graph associated with second grouped point cloud data. The operations may also include receiving, via the user interface, a user selection of at least one node indicator of the first set of node indicators, wherein the at least one node indicator represents at least a first node in the first pose graph, and generating, within the user interface, a three-dimensional point cloud rendering of point cloud data represented by the at least one node indicator for display in response to the user selection. The operations may also include presenting, within the user interface, selectable options for manipulating at least the first pose graphic, wherein the selectable options include (1) a first option to remove the first node from the first pose graphic and (2) a second option to edit one or more connections of the at least one node indicator, wherein the editing includes at least one of deleting a connection or adding a connection between the at least one node indicator and a different node indicator of the first or second set of node indicators, and then generating altered pose graphical data of at least one of the first or second pose graphic based on user selection of at least one of the first option or the second option. The operations may also include generating a high resolution map based on the altered pose graphical data and the point cloud data.
According to one embodiment, referring to the non-transitory computer-readable medium above, the first set of node indicators is displayed in a different color than the second set of node indicators to visually indicate the respective pose graph for each individual node indicator. In another embodiment, each individual node indicator of the first and second sets of node indicators represents a plurality of light detection and ranging scans captured in proximity to each other. In another embodiment, when the user selection is of the first option, generating the altered pose graphical data comprises removing one or more point clouds associated with the at least one node indicator from consideration by the computer system when generating the high resolution map. In another embodiment, when the user selection is of the second option, generating the altered pose graphical data comprises adding a connection between the at least one node indicator from the first pose graph and a node indicator from the second pose graph. In another embodiment, the user interface also provides a measurement function that enables a user to select any two points in the two-dimensional map representation or the three-dimensional point cloud rendering, and the selection of two points using the measurement function causes the computer system to display a connection between the two points and an automatically calculated distance measurement between the two points.
According to one embodiment, a computer-implemented method described herein includes obtaining point cloud data created based at least in part on a plurality of light detection and ranging (LiDAR) scans of a geographic area, and generating a user interface for display, wherein the user interface includes a two-dimensional map representation of at least a portion of the geographic area, wherein the two-dimensional map representation is generated as a projection of at least a subset of the point cloud data, wherein the user interface includes a plurality of graphical indicators superimposed within the two-dimensional map representation, and wherein each graphical indicator represents a different set of one or more light detection and ranging scans. The method further includes receiving, by user interaction with the user interface, a user selection of at least a first graphical indicator and a second graphical indicator of the plurality of graphical indicators, wherein the first graphical indicator represents a first set of point cloud data and the second graphical indicator represents a second set of point cloud data, wherein the first set of point cloud data partially intersects at least a portion of the second set of point cloud data in three-dimensional space. The method further includes generating a three-dimensional rendering of the first and second sets of point cloud data, wherein relative display positions of the first and second sets of point cloud data in the three-dimensional rendering visually indicate that a first subset of points of the first set of point cloud data intersects a second subset of points of the second set of point cloud data, wherein the first subset of points is not fully aligned with the second subset of points, and updating the user interface to include a display of the three-dimensional rendering, wherein the first set of point cloud data is displayed in a first color and the second set of point cloud data is displayed in a second color, wherein the first color is different from the second color. The method also includes displaying, within the user interface, a plurality of suggested commands for altering the positioning of the first set of point cloud data in a three-dimensional virtual space to better match at least the first subset of points with the second subset of points, and receiving one or more user commands editing the positioning of at least the first set of point cloud data in the three-dimensional virtual space, wherein the one or more user commands include at least one of a command to move the first set of point cloud data or a command to rotate the first set of point cloud data relative to the second set of point cloud data. The method also includes updating, in real-time, a display of at least the first set of point cloud data relative to the second set of point cloud data in response to the one or more user commands, and receiving, via the user interface, an indication to update the stored point cloud data to reflect the one or more user commands. The method may then include storing adjusted point cloud data for at least the first set of point cloud data based on the one or more user commands, and generating a high resolution map of the geographic area, wherein the high resolution map is generated based at least in part on the adjusted point cloud data and other point cloud data from the plurality of light detection and ranging scans.
According to another embodiment, the above computer-implemented method may further comprise: the high-resolution map is stored in an electronic data store and transmitted over a network to a plurality of vehicles for use in navigating one or more of the plurality of vehicles. According to another embodiment, a three-dimensional rendering is presented in the user interface while a two-dimensional map representation of the geographic area is still displayed in the user interface, wherein the three-dimensional rendering is presented in a different portion of the user interface than the two-dimensional map representation. According to another embodiment, the method may further include receiving a selection of a third graphical indicator of the plurality of graphical indicators through user interaction with the two-dimensional map representation, and updating a display of the three-dimensional rendering within the user interface to include a rendering of a third set of point cloud data associated with the third graphical indicator.
According to another embodiment, a computer system may include a memory and a hardware processor in communication with the memory and configured with processor-executable instructions to perform particular operations. The operations may include obtaining a first set of point cloud data and a second set of point cloud data, wherein the first and second sets of point cloud data are each based at least in part on a plurality of light detection and ranging (LiDAR) scans of a geographic area, and generating a three-dimensional rendering of the first set of point cloud data and the second set of point cloud data, wherein relative display positions of the first set of point cloud data and the second set of point cloud data in the three-dimensional rendering visually indicate a partial intersection between a first subset of points of the first set of point cloud data and a second subset of points of the second set of point cloud data, wherein the first subset of points and the second subset of points are not fully aligned. The operations may also include presenting a user interface for display, wherein the user interface includes a display of the three-dimensional rendering, and displaying, within the user interface, a plurality of suggested commands for altering the positioning of the first set of point cloud data in a three-dimensional virtual space to better match at least the first subset of points with the second subset of points. The operations may also include receiving one or more user commands editing a location of at least the first set of point cloud data in the three-dimensional virtual space, wherein the one or more user commands include at least one of a command to move the first set of point cloud data or a command to rotate the first set of point cloud data relative to the second set of point cloud data, and in response to the one or more user commands, updating a display of at least the first set of point cloud data relative to the second set of point cloud data in real-time within the user interface. The operations may also include receiving, via the user interface, an indication to update the stored point cloud data to reflect the one or more user commands, and storing, in an electronic data store, the adjusted point cloud data of at least the first set of point cloud data based on the one or more user commands.
In another embodiment of the above computer system, the first set of point cloud data is displayed in a first color and the second set of point cloud data is displayed in a second color, wherein the first color is different from the second color. In one embodiment, the first set of point cloud data is generated from a plurality of light detection and ranging scans captured in proximity to each other. In one embodiment, the operations further comprise providing, via the user interface, an option to select a command and an associated scale for the command, wherein the scale represents a numerical quantity of at least one of: movement, yaw angle, pitch angle, or roll angle. In another embodiment, the suggested commands include movements along each of the x-axis, y-axis, and z-axis. In another embodiment, each suggested command is presented along with an indication of the associated keyboard shortcut.
In another embodiment of the above computer system, the one or more user commands are received based on one or more keys input by the user, and the display is updated in response to the one or more user commands based in part on a predefined mapping of keys to commands. In another embodiment, the operations further comprise automatically determining a suggested spatial manipulation of the first set of point cloud data to better match at least the first subset of points with the second subset of points. In another embodiment, the operations further include automatically applying the suggested spatial manipulation within the three-dimensional rendering displayed in the user interface and prompting the user to confirm the suggested spatial manipulation. In yet another embodiment, the suggested spatial manipulation is determined based at least in part on determining that the first set of point cloud data is offset from the second set of point cloud data by less than a threshold, wherein the threshold represents at least one of a distance or an angle.
According to another embodiment, a non-transitory computer-readable medium stores computer-executable instructions that, when executed by one or more computer systems, configure the one or more computer systems to perform particular operations. The operations may include obtaining a first set of point cloud data and a second set of point cloud data, wherein the first and second sets of point cloud data are each based at least in part on a plurality of light detection and ranging (LiDAR) scans of a geographic area, and generating a three-dimensional rendering of the first set of point cloud data and the second set of point cloud data, wherein a first subset of points of the first set of point cloud data at least partially intersects a second subset of points of the second set of point cloud data, and the first subset of points is not fully aligned with the second subset of points. The operations may also include presenting a user interface for display, wherein the user interface includes a display of the three-dimensional rendering, and presenting a plurality of options for altering the positioning of the first set of point cloud data in a three-dimensional virtual space to better match at least the first subset of points with the second subset of points. The operations may also include receiving one or more user commands editing a location of at least the first set of point cloud data in the three-dimensional virtual space, wherein the one or more user commands include at least one of a command to move the first set of point cloud data or a command to rotate the first set of point cloud data, and in response to the one or more user commands, updating a display of at least the first set of point cloud data relative to the second set of point cloud data in real-time within the user interface. The operations may also include storing the adjusted point cloud data of at least the first set of point cloud data in an electronic data store based on the one or more user commands.
According to one embodiment, referring to the non-transitory computer-readable medium above, the plurality of options includes a command and an associated scale for each command, wherein the scale represents a numerical quantity of at least one of: movement, yaw angle, pitch angle, or roll angle. In another embodiment, the one or more user commands are received based on one or more keys entered by the user, and the display is updated in response to the one or more user commands based in part on a predefined mapping of keys to commands. In another embodiment, the operations further comprise automatically determining a suggested spatial manipulation of the first set of point cloud data to better match at least the first subset of points with the second subset of points. In another embodiment, the operations include automatically applying the suggested spatial manipulation within the three-dimensional rendering displayed in the user interface. In another embodiment, the suggested spatial manipulation is determined based at least in part on determining that the first set of point cloud data is offset from the second set of point cloud data by less than a threshold, wherein the threshold represents at least one of a distance or an angle.
In other embodiments, one or more systems may operate in accordance with one or more of the methods and/or computer-readable media recited in the preceding paragraphs. In still other embodiments, one or more methods may operate in accordance with one or more of the systems and/or computer-readable media recited in the preceding paragraphs. In still other embodiments, a computer-readable medium or media, excluding transitory propagating signals, may cause one or more computing devices having one or more processors and non-transitory computer-readable memory to operate in accordance with one or more of the systems and/or methods recited in the preceding paragraphs.
■Terminology
Conditional language such as "capable," "possible," "might," or "may," unless specifically stated otherwise or otherwise understood in the context of use, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments, or that one or more embodiments necessarily contain logic for determining, with or without user input or prompting, whether such features, elements, and/or steps are included in or are to be performed in any particular embodiment.
In the description and in the claims, the words "comprise", "comprising", and the like are to be construed in an inclusive sense, rather than in an exclusive or exhaustive sense, i.e., "including but not limited to", unless the context clearly requires otherwise. As used herein, the terms "connected," "coupled," or any variant thereof refer to any direct or indirect connection or coupling between two or more elements; the coupling or connection between the elements may be physical, logical, or a combination thereof. Furthermore, the words "herein," "above," "below," and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. Words using the singular or plural number may also include the plural or singular number, respectively, where the context permits. In reference to a list of two or more items, the word "or" encompasses all of the following interpretations of the word: any one item in the list, all items in the list, and any combination of items in the list. Likewise, in reference to a list of two or more items, the term "and/or" encompasses all of the following interpretations: any one item in the list, all items in the list, and any combination of items in the list.
In some embodiments, certain operations, acts, events, or functions of any of the algorithms described herein can be performed in a different order, added, combined, or eliminated entirely (e.g., not all of them are essential to the implementation of the algorithm). In some embodiments, the operations, acts, functions, or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores, or on other parallel architectures, rather than sequentially.
The systems and modules described herein may include software, firmware, hardware, or any combination of software, firmware, or hardware suitable for the purposes described. The software and other modules may reside on and execute on servers, workstations, personal computers, computerized tablets, PDAs, and other computing devices suitable for the purposes described herein. The software and other modules may be accessed through a local computer memory, network, browser, or other means suitable for the purposes described herein. The data structures described herein may include computer files, variables, programming arrays, programming structures, or any electronic information storage schemes or methods suitable for the purposes described herein, or any combination thereof. The user interface elements described herein may include elements from a graphical user interface, an interactive voice response, a command line interface, and other suitable interfaces.
Furthermore, the processing of the various components of the illustrated system may be distributed across multiple machines, networks, and other computing resources. Two or more components of a system may be combined into fewer components. The various components of the illustrated system may be implemented in one or more virtual machines rather than in a dedicated computer hardware system and/or computing device. Likewise, the data store shown may represent physical and/or logical data storage, including, for example, a storage area network or other distributed storage system. Furthermore, in some embodiments, the connections between the illustrated components represent possible paths of data flow, rather than actual connections between the hardware. Although some examples of possible connections are shown, in various implementations, any subset of the components shown are capable of communicating with one another.
Embodiments are also described above in connection with flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products. Each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. Such instructions may be provided to a processor of a general purpose computer, special purpose computer (e.g., including a high performance database server, a graphics subsystem, etc.) or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the actions specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a non-transitory computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computing device or other programmable data processing apparatus to cause a computer implemented process to be performed on the computing device or other programmable apparatus such that the instructions which execute on the computing device or other programmable apparatus provide steps for implementing the actions specified in the flowchart and/or block diagram block or blocks.
Any of the patents and applications mentioned above, and other references, including any that may be listed in the attached filing documents, are incorporated herein by reference. Aspects of the invention can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further implementations of the invention. These and other modifications can be made to the invention in light of the above detailed description. While certain examples of the invention have been described above, and the best mode contemplated, the invention may be practiced in many ways, regardless of the details presented in the text. The details of the system may vary significantly in its particular embodiments while still being encompassed by the invention disclosed herein. As noted above, certain terms used in describing certain features or aspects of the present invention should not be construed to imply that such terms are redefined herein to be limited to any specific characteristics, features, or aspects of the present invention with which such terms are associated. In general, terms used in the following claims should not be construed to limit the invention to the specific examples disclosed in the specification unless such terms are explicitly defined in the above detailed description. Therefore, the actual scope of the invention encompasses not only the disclosed examples, but also all equivalent ways of practicing or implementing the invention under the claims.
In order to reduce the number of claims, certain aspects of the invention are presented in certain claim forms below, but the applicant contemplates other aspects of the invention in any number of claim forms. For example, although only one aspect of the invention is recited as a means-plus-function claim under 35 U.S.C. § 112(f) (AIA), other aspects may likewise be embodied as a means-plus-function claim, or in other forms, such as being embodied in a computer-readable medium. Any claim intended to be treated under 35 U.S.C. § 112(f) will begin with a "means for" recitation, but use of the term "for" in any other context is not intended to invoke treatment under 35 U.S.C. § 112(f). Accordingly, the applicant reserves the right to add additional claims, in this application or in a later application, after filing this application.
Claims (20)
1. A computer-implemented method, comprising:
obtaining point cloud data created based at least in part on multiple light detection and ranging scans of a geographic area;
generating a user interface for display, wherein the user interface comprises a two-dimensional map representation of at least a portion of the geographic area, wherein the two-dimensional map representation is generated as a projection of at least a subset of the point cloud data, wherein the user interface comprises a plurality of graphical indicators superimposed within the two-dimensional map representation, and wherein each of the graphical indicators represents a different set of one or more light detection and ranging scans;
receiving, by user interaction with the user interface, a user selection of at least a first graphical indicator and a second graphical indicator of the plurality of graphical indicators, wherein the first graphical indicator represents a first set of point cloud data and the second graphical indicator represents a second set of point cloud data, wherein the first set of point cloud data partially intersects at least a portion of the second set of point cloud data in three-dimensional space;
generating a three-dimensional rendering of the first set of point cloud data and the second set of point cloud data, wherein relative display positions of the first set of point cloud data and the second set of point cloud data in the three-dimensional rendering visually convey a partial intersection between a first subset of points of the first set of point cloud data and a second subset of points of the second set of point cloud data, wherein the first subset of points and the second subset of points are not fully aligned;
updating the user interface to include a display of the three-dimensional rendering, wherein the first set of point cloud data is displayed in a first color and the second set of point cloud data is displayed in a second color, wherein the first color is different from the second color;
displaying, within the user interface, a plurality of suggested commands for altering the positioning of the first set of point cloud data in a three-dimensional virtual space to better match at least the first subset of points with the second subset of points;
receiving one or more user commands editing a positioning of at least the first set of point cloud data in the three-dimensional virtual space, wherein the one or more user commands include at least one of a command to move the first set of point cloud data or a command to rotate the first set of point cloud data relative to the second set of point cloud data;
in response to the one or more user commands, updating a display of at least the first set of point cloud data relative to the second set of point cloud data in real-time;
receiving, via the user interface, an indication to update stored point cloud data to reflect the one or more user commands;
storing adjusted point cloud data of at least the first set of point cloud data based on the one or more user commands; and
generating a high resolution map of the geographic area, wherein the high resolution map is generated based at least in part on the adjusted point cloud data and other point cloud data from the plurality of light detection and ranging scans.
2. The computer-implemented method of claim 1, further comprising:
storing the high resolution map in an electronic data storage device; and
transmitting the high resolution map over a network to a plurality of vehicles for use in navigating one or more of the plurality of vehicles.
3. The computer-implemented method of claim 1, wherein the three-dimensional rendering is presented in the user interface while the two-dimensional map representation of the geographic area is still displayed in the user interface, wherein the three-dimensional rendering is presented in a portion of the user interface that is different from the two-dimensional map representation.
4. The computer-implemented method of claim 3, further comprising:
receiving a selection of a third graphical indicator of the plurality of graphical indicators through user interaction with the two-dimensional map representation; and
updating a display of the three-dimensional rendering within the user interface to include rendering of a third set of point cloud data associated with the third graphical indicator.
5. A computer system, comprising:
a memory; and
a hardware processor in communication with the memory and configured with processor-executable instructions to perform operations comprising:
obtaining a first set of point cloud data and a second set of point cloud data, wherein the first set of point cloud data and the second set of point cloud data are each based at least in part on multiple light detection and ranging scans of a geographic area;
generating a three-dimensional rendering of the first set of point cloud data and the second set of point cloud data, wherein relative display positions of the first set of point cloud data and the second set of point cloud data in the three-dimensional rendering visually convey a partial intersection between a first subset of points of the first set of point cloud data and a second subset of points of the second set of point cloud data, wherein the first subset of points and the second subset of points are not fully aligned;
presenting a user interface for display, wherein the user interface includes a display of the three-dimensional rendering;
displaying, within the user interface, a plurality of suggested commands for altering the positioning of the first set of point cloud data in a three-dimensional virtual space to better match at least the first subset of points with the second subset of points;
receiving one or more user commands to edit the positioning of at least the first set of point cloud data in the three-dimensional virtual space, wherein the one or more user commands include at least one of a command to move the first set of point cloud data or a command to rotate the first set of point cloud data relative to the second set of point cloud data;
in response to the one or more user commands, updating, within the user interface, a display of at least the first set of point cloud data relative to the second set of point cloud data in real-time;
receiving, via the user interface, an indication to update stored point cloud data to reflect the one or more user commands; and
storing, in an electronic data storage device, adjusted point cloud data of at least the first set of point cloud data based on the one or more user commands.
6. The computer system of claim 5, wherein the first set of point cloud data is displayed in a first color and the second set of point cloud data is displayed in a second color, wherein the first color is different from the second color.
7. The computer system of claim 5, wherein the first set of point cloud data is generated by a plurality of light detection and ranging scans in proximity to each other.
8. The computer system of claim 5, wherein the operations further comprise providing, via the user interface, an option to select a command and an associated scale of the command, wherein the scale represents a numerical quantity of at least one of: movement, yaw, pitch, or roll.
9. The computer system of claim 5, wherein the suggested commands comprise movement along each of an x-axis, a y-axis, and a z-axis.
10. The computer system of claim 5, wherein each of the suggested commands is presented along with an indication of an associated keyboard shortcut.
11. The computer system of claim 5, wherein the one or more user commands are received based on one or more keys entered by a user, and wherein updating the display in response to the one or more user commands is based in part on a predefined mapping of keys to commands.
12. The computer system of claim 5, wherein the operations further comprise automatically determining a suggested spatial manipulation of the first set of point cloud data to better match at least the first subset of points with the second subset of points.
13. The computer system of claim 12, wherein the operations further comprise:
automatically applying the suggested spatial manipulation within the three-dimensional rendering displayed in the user interface; and
prompting the user to approve the suggested spatial manipulation.
14. The computer system of claim 13, wherein the suggested spatial manipulation is determined based at least in part on a determination that the first set of point cloud data is offset from the second set of point cloud data by less than a threshold, wherein the threshold represents at least one of a distance or an angle.
15. A non-transitory computer-readable medium storing computer-executable instructions that, when executed by one or more computer systems, configure the one or more computer systems to perform operations comprising:
obtaining a first set of point cloud data and a second set of point cloud data, wherein the first set of point cloud data and the second set of point cloud data are each based at least in part on multiple light detection and ranging scans of a geographic area;
generating a three-dimensional rendering of the first set of point cloud data and the second set of point cloud data, wherein a first subset of points of the first set of point cloud data and a second subset of points of the second set of point cloud data at least partially intersect in the three-dimensional rendering, wherein the first subset of points is not fully aligned with the second subset of points;
presenting a user interface for display, wherein the user interface includes a display of the three-dimensional rendering;
presenting a plurality of options for altering the positioning of the first set of point cloud data in a three-dimensional virtual space to better match at least the first subset of points with the second subset of points;
receiving one or more user commands editing a positioning of at least the first set of point cloud data in the three-dimensional virtual space, wherein the one or more user commands include at least one of a command to move the first set of point cloud data or a command to rotate the first set of point cloud data;
in response to the one or more user commands, updating, within the user interface, a display of at least the first set of point cloud data relative to the second set of point cloud data in real-time; and
storing, in an electronic data storage device, adjusted point cloud data of at least the first set of point cloud data based on the one or more user commands.
16. The non-transitory computer-readable medium of claim 15, wherein the plurality of options includes commands and an associated scale for each command, wherein the scale represents a numerical quantity of at least one of: movement, yaw, pitch, or roll.
17. The non-transitory computer-readable medium of claim 15, wherein the one or more user commands are received based on one or more keys entered by a user, and wherein updating the display in response to the one or more user commands is based in part on a predefined mapping of keys to commands.
18. The non-transitory computer-readable medium of claim 15, wherein the operations further comprise automatically determining a suggested spatial manipulation of the first set of point cloud data to better match at least the first subset of points with the second subset of points.
19. The non-transitory computer-readable medium of claim 18, wherein the operations further comprise:
automatically applying the suggested spatial manipulation within the three-dimensional rendering displayed in the user interface.
20. The non-transitory computer-readable medium of claim 18, wherein the suggested spatial manipulation is determined based at least in part on determining that the first set of point cloud data is offset from the second set of point cloud data by less than a threshold, wherein the threshold represents at least one of a distance or an angle.
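Claim 1 recites generating a two-dimensional map representation as a projection of the point cloud data. The following Python sketch shows one simple way such a projection could be computed, assuming a top-down occupancy grid; the `project_to_2d_map` name, the 0.5 m cell size, and the boolean occupancy representation are illustrative assumptions rather than details taken from the specification.

```python
import numpy as np

def project_to_2d_map(points: np.ndarray, resolution: float = 0.5) -> np.ndarray:
    """Project an (N, 3) point cloud onto a top-down 2D occupancy grid.

    Each point's x/y coordinates are binned into square cells of `resolution`
    metres; a cell is marked occupied if any point falls inside it.
    """
    xy = points[:, :2]
    mins = xy.min(axis=0)
    cells = np.floor((xy - mins) / resolution).astype(int)
    shape = tuple(cells.max(axis=0) + 1)
    grid = np.zeros(shape, dtype=bool)
    grid[cells[:, 0], cells[:, 1]] = True
    return grid

# Example: 10,000 synthetic LiDAR returns over a 50 m x 50 m area.
rng = np.random.default_rng(42)
points = np.concatenate([rng.uniform(0.0, 50.0, (10_000, 2)),
                         rng.uniform(0.0, 3.0, (10_000, 1))], axis=1)
occupancy_grid = project_to_2d_map(points)   # roughly a (100, 100) grid
```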
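Claims 8 through 11 and 16 through 17 describe commands with an associated scale (movement, yaw, pitch, or roll) entered via keyboard shortcuts under a predefined mapping of keys to commands. The minimal sketch below, assuming a hypothetical key-to-command table and NumPy point arrays, illustrates how such a mapping could reposition the first set of point cloud data; the key bindings, step sizes, and function names are illustrative assumptions.

```python
import numpy as np

# Hypothetical mapping of keyboard shortcuts to commands: each entry pairs a
# command name with a numerical scale (metres of movement or radians of yaw).
KEY_COMMANDS = {
    "d": ("move", np.array([0.10, 0.0, 0.0])),   # +10 cm along the x-axis
    "a": ("move", np.array([-0.10, 0.0, 0.0])),  # -10 cm along the x-axis
    "w": ("move", np.array([0.0, 0.10, 0.0])),   # +10 cm along the y-axis
    "s": ("move", np.array([0.0, -0.10, 0.0])),  # -10 cm along the y-axis
    "q": ("yaw", np.radians(0.5)),               # rotate +0.5 degrees about z
    "e": ("yaw", np.radians(-0.5)),              # rotate -0.5 degrees about z
}

def yaw_matrix(angle: float) -> np.ndarray:
    """Rotation matrix about the z-axis by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def apply_command(points: np.ndarray, key: str) -> np.ndarray:
    """Return a repositioned copy of an (N, 3) point cloud for one key press."""
    command, scale = KEY_COMMANDS[key]
    if command == "move":
        return points + scale
    if command == "yaw":
        centroid = points.mean(axis=0)
        return (points - centroid) @ yaw_matrix(scale).T + centroid
    raise ValueError(f"unsupported command: {command}")

# Example: nudge a toy "first set" while the second set stays fixed; a UI
# layer would re-render the result against the fixed cloud after each key press.
first_set = np.random.default_rng(0).random((1_000, 3)) * 10.0
first_set = apply_command(first_set, "d")   # move along +x
first_set = apply_command(first_set, "q")   # small yaw about the cloud centroid
```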
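Claims 12 through 14 and 18 through 20 recite automatically determining a suggested spatial manipulation and conditioning it on the offset between the two sets being below a threshold. The sketch below estimates a suggested translation from nearest-neighbour residuals between the overlapping subsets and withholds the suggestion when the estimated offset exceeds a distance threshold; the brute-force matching, the `suggest_translation` name, and the 0.5 m default threshold are assumptions for illustration, not the method of the patent.

```python
from typing import Optional

import numpy as np

def suggest_translation(first_set: np.ndarray,
                        second_set: np.ndarray,
                        max_offset: float = 0.5) -> Optional[np.ndarray]:
    """Suggest a translation aligning the first cloud onto the second.

    Each point in `first_set` is matched to its nearest neighbour in
    `second_set` (brute force, for clarity) and the residuals are averaged.
    The suggestion is returned only when the estimated offset is below
    `max_offset`; otherwise None is returned and the user repositions the
    cloud manually.
    """
    # Pairwise differences: shape (len(first_set), len(second_set), 3).
    diffs = second_set[None, :, :] - first_set[:, None, :]
    dists = np.linalg.norm(diffs, axis=2)
    nearest = dists.argmin(axis=1)
    residuals = second_set[nearest] - first_set
    offset = residuals.mean(axis=0)
    return offset if np.linalg.norm(offset) < max_offset else None

# Example: the second set is the first set shifted by a small, recoverable amount.
rng = np.random.default_rng(1)
first = rng.random((500, 3)) * 20.0
second = first + np.array([0.05, -0.02, 0.0])
suggestion = suggest_translation(first, second)
if suggestion is not None:
    aligned_first = first + suggestion   # apply, then prompt the user to approve
```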
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2018/067892 WO2020139373A1 (en) | 2018-12-28 | 2018-12-28 | Interactive 3d point cloud matching |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113748314A CN113748314A (en) | 2021-12-03 |
CN113748314B true CN113748314B (en) | 2024-03-29 |
Family
ID=71129659
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201880100676.4A Active CN113748314B (en) | 2018-12-28 | 2018-12-28 | Interactive three-dimensional point cloud matching |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113748314B (en) |
WO (1) | WO2020139373A1 (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112021998B (en) * | 2020-07-20 | 2023-08-29 | 科沃斯机器人股份有限公司 | Data processing method, measurement system, autonomous mobile device and cleaning robot |
CN111929694B (en) * | 2020-10-12 | 2021-01-26 | 炬星科技(深圳)有限公司 | Point cloud matching method, point cloud matching equipment and storage medium |
CN112578406B (en) * | 2021-02-25 | 2021-06-29 | 北京主线科技有限公司 | Vehicle environment information sensing method and device |
CN113160405B (en) * | 2021-04-26 | 2025-01-10 | 深圳市慧鲤科技有限公司 | Point cloud map generation method, device, computer equipment and storage medium |
TWI779592B (en) * | 2021-05-05 | 2022-10-01 | 萬潤科技股份有限公司 | Map editing method and device |
US11887271B2 (en) * | 2021-08-18 | 2024-01-30 | Hong Kong Applied Science and Technology Research Institute Company Limited | Method and system for global registration between 3D scans |
CN114322987B (en) * | 2021-12-27 | 2024-02-23 | 北京三快在线科技有限公司 | Method and device for constructing high-precision map |
TWI810809B (en) * | 2022-02-10 | 2023-08-01 | 勤崴國際科技股份有限公司 | Geodetic Coordinate Processing Method for Street Signs |
CN115423942A (en) * | 2022-06-29 | 2022-12-02 | 深圳市镭神智能系统有限公司 | Data processing method, device, equipment and storage medium |
CN115100359A (en) * | 2022-07-13 | 2022-09-23 | 北京有竹居网络技术有限公司 | Image processing method, device, equipment and storage medium |
CN115097977A (en) * | 2022-07-13 | 2022-09-23 | 北京有竹居网络技术有限公司 | Method, apparatus, device and storage medium for point cloud processing |
CN115097976B (en) * | 2022-07-13 | 2024-03-29 | 北京有竹居网络技术有限公司 | Method, device, apparatus and storage medium for image processing |
CN116188660B (en) * | 2023-04-24 | 2023-07-11 | 深圳优立全息科技有限公司 | Point cloud data processing method and related device based on stream rendering |
CN117841988B (en) * | 2024-03-04 | 2024-05-28 | 厦门中科星晨科技有限公司 | Parking control method, device, medium and equipment for unmanned vehicle |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103106680A (en) * | 2013-02-16 | 2013-05-15 | 赞奇科技发展有限公司 | Implementation method for three-dimensional figure render based on cloud computing framework and cloud service system |
CN108765487A (en) * | 2018-06-04 | 2018-11-06 | 百度在线网络技术(北京)有限公司 | Rebuild method, apparatus, equipment and the computer readable storage medium of three-dimensional scenic |
CN109064506A (en) * | 2018-07-04 | 2018-12-21 | 百度在线网络技术(北京)有限公司 | Accurately drawing generating method, device and storage medium |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100204964A1 (en) * | 2009-02-09 | 2010-08-12 | Utah State University | Lidar-assisted multi-image matching for 3-d model and sensor pose refinement |
US9024970B2 (en) * | 2011-12-30 | 2015-05-05 | Here Global B.V. | Path side image on map overlay |
EP2923174A2 (en) * | 2012-11-22 | 2015-09-30 | GeoSim Systems Ltd. | Point-cloud fusion |
US9424672B2 (en) * | 2013-11-07 | 2016-08-23 | Here Global B.V. | Method and apparatus for processing and aligning data point clouds |
US10163255B2 (en) * | 2015-01-07 | 2018-12-25 | Geopogo, Inc. | Three-dimensional geospatial visualization |
US20160379366A1 (en) * | 2015-06-25 | 2016-12-29 | Microsoft Technology Licensing, Llc | Aligning 3d point clouds using loop closures |
CN108286976A (en) * | 2017-01-09 | 2018-07-17 | 北京四维图新科技股份有限公司 | The fusion method and device and hybrid navigation system of a kind of point cloud data |
2018
- 2018-12-28: WO PCT/US2018/067892 patent/WO2020139373A1/en (active, Application Filing)
- 2018-12-28: CN CN201880100676.4A patent/CN113748314B/en (active, Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103106680A (en) * | 2013-02-16 | 2013-05-15 | 赞奇科技发展有限公司 | Implementation method for three-dimensional figure render based on cloud computing framework and cloud service system |
CN108765487A (en) * | 2018-06-04 | 2018-11-06 | 百度在线网络技术(北京)有限公司 | Rebuild method, apparatus, equipment and the computer readable storage medium of three-dimensional scenic |
CN109064506A (en) * | 2018-07-04 | 2018-12-21 | 百度在线网络技术(北京)有限公司 | Accurately drawing generating method, device and storage medium |
Non-Patent Citations (1)
Title |
---|
Tao Zhipeng; Chen Zhiguo; Wang Ying; Wu Bingbing; Cheng Siqi. Research on real-time visualization of massive three-dimensional terrain data. Technology Innovation and Application, 2013, No. 30, full text. *
Also Published As
Publication number | Publication date |
---|---|
CN113748314A (en) | 2021-12-03 |
WO2020139373A1 (en) | 2020-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113748314B (en) | Interactive three-dimensional point cloud matching | |
US10976421B2 (en) | Interface for improved high definition map generation | |
US12117307B2 (en) | Interactive 3D point cloud matching | |
US11365976B2 (en) | Semantic label based filtering of objects in an image generated from high definition map data | |
US11874119B2 (en) | Traffic boundary mapping | |
US11644319B2 (en) | Destination changes in autonomous vehicles | |
US11983010B2 (en) | Systems and methods for automated testing of autonomous vehicles | |
CN110832417A (en) | Generating routes for autonomous vehicles using high-definition maps | |
US11186293B2 (en) | Method and system for providing assistance to a vehicle or driver thereof | |
US11361490B2 (en) | Attention guidance for ground control labeling in street view imagery | |
CN113728310A (en) | Architecture for distributed system simulation | |
US11989805B2 (en) | Dynamic geometry using virtual spline for making maps | |
WO2020139377A1 (en) | Interface for improved high definition map generation | |
US20220404823A1 (en) | Lane path modification framework | |
US20220058825A1 (en) | Attention guidance for correspondence labeling in street view image pairs | |
US20220397420A1 (en) | Method and apparatus for providing an updated map model | |
US20240384997A1 (en) | Autonomous mobile unit control system and control method | |
US20240370027A1 (en) | Control system, control method, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||