
US20230351687A1 - Method for detecting and modeling of object on surface of road - Google Patents

Method for detecting and modeling of object on surface of road

Info

Publication number
US20230351687A1
Authority
US
United States
Prior art keywords
road
model
vehicle
processor
view image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US18/208,223
Inventor
Haitao Xue
Dongbing Quan
Changhong Yang
James Herbst
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aumovio Germany GmbH
Qualcomm Technologies Inc
Original Assignee
Continental Holding China Co Ltd
Continental Automotive GmbH
Qualcomm Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Continental Holding China Co Ltd, Continental Automotive GmbH, Qualcomm Technologies Inc filed Critical Continental Holding China Co Ltd
Priority to US18/208,223
Assigned to CONTINENTAL AUTOMOTIVE GMBH reassignment CONTINENTAL AUTOMOTIVE GMBH ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HERBST, JAMES
Assigned to QUALCOMM TECHNOLOGIES, INC. reassignment QUALCOMM TECHNOLOGIES, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CONTINENTAL AUTOMOTIVE GMBH
Assigned to Continental Holding China Co., Ltd. reassignment Continental Holding China Co., Ltd. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: YANG, Changhong, QUAN, Dongbing, XUE, Haitao
Publication of US20230351687A1
Assigned to Continental Automotive Technologies GmbH reassignment Continental Automotive Technologies GmbH ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CONTINENTAL AUTOMOTIVE GMBH, Continental Holding China Co., Ltd.
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/30Polynomial surface description
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/579Depth or shape recovery from multiple images from motion
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/28Indexing scheme for image data processing or generation, in general involving image processing hardware
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30236Traffic on road, railway or crossing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256Lane; Road marking
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/56Particle system, point based geometry or rendering
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/0962Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/0962Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096766Systems involving transmission of highway information, e.g. weather, speed limits where the system is characterised by the origin of the information transmission

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computing Systems (AREA)
  • Traffic Control Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method for detecting and modelling of an object on a surface of a road by first scanning the road and generating a 3D model of the scanned road (which 3D model of the scanned road contains a description of a 3D surface of the road) and then creating a top-view image of the road. The object is detected on the surface of the road by evaluating the top-view image of the road. The detected object is projected on the surface of the road in the 3D model of the scanned road. The object projected on the surface of the road in the 3D model of the scanned road is modelled.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 17/344,405, filed Jun. 10, 2021, which is a continuation of International Application No. PCT/CN2018/120886, filed on 13 Dec. 2018, which designates the United States. The disclosures of both applications are incorporated herein by reference in their entireties.
  • BACKGROUND
  • 1. Field of the Invention
  • The invention relates to a method for detecting and modelling of an object on a surface of a road. Moreover, the disclosure relates to a system for detecting and modelling of an object on a surface of a road.
  • 2. Description of Relevant Art
  • Advanced driver assistance systems and autonomously driving cars require high-precision maps of roads and other areas on which vehicles can drive. Determining a vehicle's position on a road, or even within a lane of a road, with an accuracy of a few centimeters cannot be achieved using conventional satellite navigation systems, for example GPS, Galileo, GLONASS, or other known positioning techniques such as triangulation and the like. However, in particular when a self-driving vehicle moves on a road with multiple lanes, it needs to determine its lateral and longitudinal position within the lane exactly.
  • One known way to determine a vehicle's position with high precision involves one or more cameras capturing images of road markings/road paints and comparing unique features of those markings/paints, or of objects along the road, in the captured images with corresponding reference images obtained from a database that records the respective position of each road marking/paint or object. This way of determining a position yields sufficiently accurate results only when the database provides highly accurate position data with the images and is updated regularly or at suitable intervals.
  • Road markings may be captured and registered by special purpose vehicles that capture images of a road while driving, or may be extracted from aerial photographs or satellite images. The latter variant may be considered advantageous since a perpendicular view or top-view image shows little distortion of road markings/paints and other features on substantially flat surfaces.
  • However, aerial photographs and satellite images may not provide sufficient detail for generating highly accurate maps of road markings/paints and other road features. Also, aerial photographs and satellite images are less suitable for providing details on objects and road features that are best viewed from a ground perspective.
  • SUMMARY
  • The embodiments provide a method for detecting and modelling of an object on a surface of a road which allows an accurate three-dimensional (3D) position of the object on the surface of the road to be determined. Embodiments may further provide a system for detecting and modelling of an object on a surface of a road configured to provide an accurate three-dimensional position of the object on the surface of the road.
  • One embodiment relates to a method for detecting and modelling of an object on a surface of a road. In a first step, the road is scanned. In a subsequent second step, a 3D model of the scanned road is generated. The 3D model contains a description (data representation) of a 3D surface of the road. In a subsequent third step, a top-view image of the road is created.
  • In a fourth step of the method, the object is detected on the surface of the road by evaluating the top-view image of the road. In a fifth step of the method, the detected object is projected on the surface of the road in the 3D model of the scanned road. In a final sixth step of the method, the object projected on the surface of the road in the 3D model of the scanned road is modelled.
  • Conventional methods for detecting an object/road paint located on a surface of a road and for modelling the detected object/road paint often provide an inaccurate three-dimensional position of the road paint or object as well as incorrect logical information about it. In particular, since a patch of paint is detected anew in every frame captured by a camera system, it is very difficult to establish the connectivity between detection results from different frames. In addition, the detected object or painting may have an arbitrary shape in the real world, so that a conventional method for paint detection and modelling represents it with large error.
  • In an embodiment, a method for detecting and modelling of an object on a surface of a road merges information regarding the 3D road surface and detected objects or road paints on the surface of the road from distributed vehicles driving along the road at different times in order to adjust and refine the road surface estimation and the road paint/object detection. The framework of the method for detecting and modelling of an object on a surface of a road can be divided into four basic parts.
  • In a first part of the method, the road surface is estimated by each vehicle driving along the road. Each vehicle reports the detected road surface to a remote server. In the remote server, the different information obtained from the plurality of vehicles driving along the road is conflated. As a result, a more accurate road surface model is calculated in the remote server.
  • In a second part of the method, the course of the road captured by a forward-facing camera unit of a vehicle is transformed from the front camera view into a bird's-eye view. In particular, for every frame captured by the camera unit, an inverse perspective transformation is applied first; a part of the transformed image is then extracted and combined into a large image of the complete course of the road, as sketched below. An object on the surface of the road or a road painting is then detected in the top-view/bird's-eye view image of the scanned road.
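  • By way of illustration only (the description does not prescribe a particular implementation), the inverse perspective transformation can be sketched as a planar homography warp. The following Python/OpenCV snippet is a minimal sketch; the corner correspondences src_pts/dst_pts, the image file name and the output size are placeholder assumptions that would in practice follow from the camera calibration.

```python
import cv2
import numpy as np

# Hypothetical correspondences: pixel corners of a flat road patch in the
# front-camera image (src) and their positions in a metric top-view grid (dst).
src_pts = np.float32([[420, 480], [860, 480], [1180, 720], [100, 720]])
dst_pts = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])

# Homography mapping the camera view of the road plane to the bird's-eye view.
H = cv2.getPerspectiveTransform(src_pts, dst_pts)

frame = cv2.imread("frame_50a.png")            # one captured frame (placeholder path)
top_view_patch = cv2.warpPerspective(frame, H, (400, 600))

# The warped patch corresponds to the low-distortion area selected from the frame;
# successive patches are later combined into one large top-view image of the road.
cv2.imwrite("top_view_patch_50a.png", top_view_patch)
```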
  • In a third part of the method, a 3D object/paint projection is performed from the 2D top-view/bird's-eye view image to the 3D model of the road surface. After a detected object/road paint has been projected from the 2D top-view/bird's-eye view image onto the 3D model of the road surface, the 3D model of the road is evaluated to obtain a 3D position of the object/road paint and logical information about the object/road paint.
  • In the fourth and last part of the method, the detected object/road paint on the surface of the road is modelled in 3D. As the object/road paint on the surface of the road may have any shape, a Non-Uniform Rational B-Spline (NURBS) technique may be used for the 3D modelling of the detected object/road paint. The NURBS curve-fitting algorithm can represent any form of curve, so the NURBS algorithm makes it possible to represent any object/road paint on the surface of the road precisely. By comparison, a conventional method for modelling an object/road paint on a surface of a road usually represents the detected object/road paint by polylines, which consumes a lot of memory capacity; the NURBS representation compresses the data considerably.
  • An embodiment relates to a system for detecting and modelling of an object on a surface of a road.
  • In an embodiment, the system includes a plurality of vehicles driving along the road and a remote server spatially located far away from the vehicles. Each of the vehicles includes a respective camera unit to scan the road. Furthermore, each of the vehicles may be configured to generate a 3D model of the scanned road. The 3D model contains a description of the surface of the road. Each of the vehicles may be configured to create a respective individual top-view image of the road and to forward it to the remote server.
  • The remote server may be configured to create a top-view image of the scanned road by evaluating and conflating the respective individual top-view images of the scanned road. The remote server may further be configured to detect an object on the surface of the road by evaluating the top-view image of the road. Furthermore, the remote server may be configured to project the detected object on the surface of the road in the 3D model of the scanned road. The remote server may further be configured to model the object projected on the surface of the road in the 3D model of the scanned road.
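  • Purely as an illustration of the data flow between the vehicles and the remote server (no message format is defined in this description), a hypothetical upload payload could look as follows; every field name and type is an assumption made for the sketch.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RoadScanReport:
    """Hypothetical payload a vehicle might upload to the remote server.

    Only *what* is forwarded (an individual 3D model and an individual top-view
    image) is stated in the description; the field names and types are assumed.
    """
    vehicle_id: str
    timestamp_s: float
    point_cloud: List[Tuple[float, float, float]]   # individual 3D road-surface model
    top_view_png: bytes                              # individual top-view image, encoded
    top_view_scale_px_per_m: float                   # scale needed when conflating views

# A fake report as it might be queued for transmission by communication system 13.
report = RoadScanReport(
    vehicle_id="10a",
    timestamp_s=1700000000.0,
    point_cloud=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.02)],
    top_view_png=b"",                                # placeholder, not real image data
    top_view_scale_px_per_m=10.0,
)
```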
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In the following, the invention will be described by way of example, without limitation of the general inventive concept, with reference to exemplary embodiments and to the drawings.
  • FIG. 1 shows a flowchart of a method for detecting and modelling of an object on a surface of a road;
  • FIG. 2 shows a simplified block diagram of a system configured to detect and model an object on a surface of a road;
  • FIG. 3A shows a first simplified scene captured by a camera unit and a selection of an area of a captured picture of a road for further processing, and
  • FIG. 3B shows a second simplified scene captured by a camera unit and a selection of an area of the captured picture of a road for further processing.
  • Generally, the drawings are not to scale. Like elements and components are referred to by like labels and numerals. For simplicity of illustration, not all elements and components depicted and labeled in one drawing are necessarily labeled in another drawing, even if these elements and components appear in such other drawing.
  • While various modifications and alternative forms, of implementation of the idea of the invention are within the scope of the invention, specific embodiments thereof are shown by way of example in the drawings and are described below in detail. It should be understood, however, that the drawings and related detailed description are not intended to limit the implementation of the idea of the invention to the particular form disclosed in this application, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.
  • DETAILED DESCRIPTION
  • The method for detecting and modelling of an object on a surface of a road is explained in the following with reference to FIG. 1 illustrating a sequence of different steps of the method as well as with reference to FIG. 2 illustrating components of a system for detecting and modelling of an object on a surface of a road.
  • In step S1 of the method, the road 40 along which a vehicle is driving is scanned, that is, optically examined, by the vehicle. In an embodiment of the system shown in FIG. 2 , a plurality of vehicles 10 a, 10 b and 10 c drive along the road 40 and scan the course of the road while driving. For this purpose, each of the vehicles includes a respective optical camera unit 11. The camera unit 11 may be a vehicle-mounted, forward-facing camera. The respective camera unit 11 may include a CCD sensor array. Preferably, a simple mono-camera may be provided. Alternatively, a stereo camera, which may have two or more imaging sensors mounted at a distance (separated) from each other, may be used. FIG. 3A and FIG. 3B show two successive images 50 a, 50 b of the road 40 captured by the camera unit 11.
  • In step S2 of the method, a 3D model of the scanned road 40 is generated. The 3D model contains a description of a 3D surface of the road 40. Notably, the generation of a 3D model of the scanned road 40 is possible even if the camera unit 11 is configured as a mono-camera; a sketch of one such approach follows below. The generated 3D model of the scanned road 40 may be constructed or configured as a point cloud. In particular, a dense or semi-dense point cloud may be generated by evaluating the captured pictures with a respective processor unit 12 (of each of the vehicles 10 a, 10 b and 10 c) while driving along the road. A person of skill in the art will appreciate that degrees of density of the point cloud may be defined, for example, in accord with the common understanding of such degrees in the related art. For example, a point cloud is considered sparse when its density is from about 0.5 pts/m2 to about 1 pt/m2; a low-density point cloud has a density substantially between 1 pt/m2 and 2 pts/m2; a medium-density point cloud may be characterized by a density of about 2 pts/m2 to 5 pts/m2; and a high-density point cloud has a density from about 5 pts/m2 to about 10 pts/m2. The point cloud is considered extremely dense if its density exceeds 10 pts/m2.
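  • As a hedged illustration of how a sparse point cloud can be obtained from a mono-camera (the description does not specify the reconstruction algorithm), the following sketch triangulates feature matches between two successive frames; the intrinsic matrix K, the image file names and the ORB settings are assumptions.

```python
import cv2
import numpy as np

# Assumed intrinsics of the mono camera (placeholder values).
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame_50a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_50b.png", cv2.IMREAD_GRAYSCALE)

# Detect and match sparse features between the two successive frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Relative camera motion between the frames from the essential matrix.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate the matched points into a sparse 3D point cloud (camera frame).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T                     # N x 3 triangulated points
print("triangulated", cloud.shape[0], "points")
```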
  • In an embodiment of the method, a respective individual 3D model of the scanned road 40 may be generated by each of the vehicles 10 a, 10 b and 10 c. The respective individual 3D model may be forwarded by each of the vehicles 10 a, 10 b and 10 c to a remote server 20 that is located far away from (that is, spatially separated from) these vehicles 10 a, 10 b and 10 c. In order to transmit the respective generated individual 3D models of the scanned road 40 to the remote server 20, each of the vehicles 10 a, 10 b and 10 c includes a communication system 13.
  • Each of the individual 3D models received from the vehicles 10 a, 10 b and 10 c is stored in a storage unit 22 of the remote server 20. The remote server 20 generates the 3D model of the scanned road 40 by evaluating and conflating (merging) the respective individual 3D models of the scanned road 40 received from the vehicles 10 a, 10 b and 10 c. In particular, the various point clouds generated by each of the vehicles while driving along the road are matched (that is, fitted, for example by stretching and/or bending the point clouds, as appropriate) by a processor unit 21 of the remote server 20 to provide the 3D model of the road 40; one way to sketch such matching is given below. The 3D model contains information about the road surface, so that road surface estimation may be performed by the remote server 20. An accurate road surface model of the scanned road may be constructed by the processor unit 21 by conflating and matching the various individual 3D models generated by each of the vehicles 10 a, 10 b and 10 c.
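  • The conflating/matching of the individual point clouds is not tied to a specific algorithm here. One simple sketch, assuming point correspondences between two vehicles' clouds have already been established (for example by nearest-neighbour search inside an ICP-style loop), is a least-squares rigid alignment (Kabsch/SVD); the function name and the synthetic clouds are illustrative only.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rotation/translation mapping `source` onto `target`.

    Both arrays are N x 3 with row-wise corresponding points, e.g. road-surface
    points reported by two different vehicles for the same stretch of road.
    """
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    # Kabsch: SVD of the cross-covariance of the centred point sets.
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Synthetic example: vehicle B's cloud is a rotated/shifted copy of vehicle A's.
cloud_a = np.random.rand(500, 3)
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
cloud_b = cloud_a @ R_true.T + np.array([2.0, 0.5, 0.0])

R, t = rigid_align(cloud_a, cloud_b)
merged = np.vstack([cloud_a @ R.T + t, cloud_b])     # conflated road-surface cloud
```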
  • In step S3 of the method, a top-view/bird's-eye view image of the road 40 (that is, an image formed from a vantage point directly above the road) is created. In particular, a respective individual top-view/bird's-eye view image of the scanned road 40 is created by each of the vehicles 10 a, 10 b and 10 c. The respective individual top-view/bird's-eye view image is forwarded by each of the communication systems 13 of the vehicles 10 a, 10 b and 10 c to the remote server 20. The remote server 20 may create the top-view image of the scanned road 40 by evaluating and conflating the respective individual top-view images of the scanned road 40. Objects located on the surface of the road, for example road paints, may be detected by the processor unit 21 by evaluating the 3D model of the scanned road 40 and the top-view image of the scanned road 40.
  • The creation of the respective individual top-view images of the scanned road 40 by each of the vehicles 10 a, 10 b and 10 c is described in the following with reference to FIGS. 3A and 3B.
  • FIG. 3A shows a first image/picture 50 a of a simplified scene as captured by the camera unit 11 of one of the vehicles 10 a, 10 b and 10 c driving along the road 40. FIG. 3B shows a second image/picture 50 b of the simplified scene captured by the camera unit 11 of the same vehicle a short time later than the first picture. A dotted line in each of the captured images 50 a, 50 b designates/surrounds a zone (or region, or portion) of each of the images 50 a, 50 b in which the camera optics of the camera unit 11 cause minimum optical distortion. The zone in which the camera optics cause minimum distortion is located in the central area of each of the captured pictures 50 a, 50 b.
  • As a given vehicle moves forward, features in the scene approach the vehicle from the front and ultimately pass it, leaving the boundaries of the scene defined by the field of view of the camera unit 11. As illustrated in FIG. 3B, the vehicle has already moved forward a certain distance (judging by comparison with the scene shown in FIG. 3A), so that an object/road paint 60 located on the surface of the road 40, for example a directional arrow, is now repositioned in the foreground. Similarly, a traffic sign 30 shown in FIG. 3A in the background region has moved into the central area of the image 50 b. As shown in FIGS. 3A and 3B, a sequence of images (in this example, at least a first respective individual picture 50 a and a second respective individual picture 50 b) is captured with a time delay by the respective camera unit 11 of each of the vehicles 10 a, 10 b and 10 c. A respective first area 51 is selected by each of the vehicles 10 a, 10 b and 10 c from the first image 50 a so that it is located in a zone of the first image 50 a in which the optics of the camera unit 11 cause minimum distortion. Furthermore, a respective second area 52 is selected by each of the vehicles 10 a, 10 b and 10 c from the second image 50 b so that it is located in a zone of the second image 50 b in which the optics of the camera unit 11 cause minimum distortion.
  • The respective first selected areas 51 are then transformed by each of the vehicles 10 a, 10 b and 10 c to a respective first top-view perspective of the scanned road. Furthermore, the respective second selected areas 52 are then transformed by each of the vehicles 10 a, 10 b and 10 c to respective second top-view perspectives of the scanned road. In order to create the respective individual top-view/bird's-eye view image, these respective first and second top-view perspectives are stitched together (for example, with the use of an approach known in the art) by each of the vehicles 10 a, 10 b and 10 c.
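  • A minimal sketch of the stitching is given below: successive warped patches are pasted onto a long canvas, assuming the longitudinal pixel offset of each patch along the road is known (for example from vehicle odometry between the frames). The function name, the patch size and the overlap-handling strategy are assumptions made for the sketch.

```python
import numpy as np

def stitch_top_view(patches, offsets_px, patch_shape=(600, 400, 3)):
    """Paste successive bird's-eye patches into one long top-view mosaic.

    `offsets_px` holds the longitudinal pixel offset of each patch along the
    road, assumed known here (e.g. from odometry between the camera frames).
    """
    h, w, c = patch_shape
    canvas = np.zeros((max(offsets_px) + h, w, c), dtype=np.uint8)
    for patch, off in zip(patches, offsets_px):
        region = canvas[off:off + h]
        empty = region.sum(axis=2) == 0              # keep pixels already written
        region[empty] = patch[empty]
    return canvas

# Two hypothetical warped patches, 120 px apart along the road.
patch_a = np.random.randint(0, 256, (600, 400, 3), dtype=np.uint8)
patch_b = np.random.randint(0, 256, (600, 400, 3), dtype=np.uint8)
mosaic = stitch_top_view([patch_a, patch_b], [0, 120])
```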
  • The transformation to obtain the top-view perspective of the respective selected area and the step of stitching together the top-view perspectives may be executed by the respective processor unit 12 of each of the vehicles 10 a, 10 b and 10 c. The transformation may be, for example, an inverse perspective transformation which transforms each of the areas 51, 52 from the view of the camera unit 11 into the bird's-eye view. As a result of stitching the respective top-view perspectives together, each vehicle forms its own, position-dependent, individual view of the same road.
  • In step S4 of the method, the object/road paint 60 on the surface of the road 40 (illustrated in this example by the directional arrow shown in FIGS. 3A and 3B) is detected by evaluating the top-view image of the road 40, for example by searching for objects and/or changes in color and/or contours of colored portions of the top-view image; a sketch of such a search is given below. This step makes it possible to detect objects located on the surface of the road 40, such as road paints or other objects, for example a cover of a water drain.
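  • A simple, hedged sketch of such a colour/contour search in the stitched top-view image follows; the brightness and area thresholds are assumed values, and a production system might use a learned detector instead.

```python
import cv2

top_view = cv2.imread("top_view_mosaic.png")         # stitched top-view image (placeholder)
gray = cv2.cvtColor(top_view, cv2.COLOR_BGR2GRAY)

# Road paint is usually much brighter than asphalt; the threshold of 180 is an
# assumed value and would be tuned for the actual imagery.
_, paint_mask = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(paint_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Keep only contours large enough to plausibly be a marking (area threshold assumed).
candidates = [c for c in contours if cv2.contourArea(c) > 200.0]
for c in candidates:
    x, y, w, h = cv2.boundingRect(c)
    print(f"road-paint candidate at top-view pixels x={x}, y={y}, size {w}x{h}")
```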
  • In a step S5 of the method, the detected object 60 is projected onto the surface of the road 40 in the 3D model of the scanned road 40. In order to perform the projecting step (that is, to effectuate the mathematical projection via one of the known methods of linear algebra, in one example), the pictures 50 a, 50 b of the road captured by the camera unit 11, the top-view image of the road, and the point cloud of the 3D model of the scanned road are compared and matched by the processor unit 21 of the remote server 20.
  • The matching process enables a detected object 60 to be projected into the 3D model of the scanned road 40; one possible realization is sketched below. In one embodiment, a 3D position and logical information about the object 60 are determined after the object 60 detected in the top-view image of the road 40 has been projected onto the surface of the road 40 in the 3D model of the scanned road.
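  • One possible realization of this projection, sketched under the assumption that the top-view image shares its x/y axes with the conflated point cloud and that its scale (pixels per metre) is known: each detected top-view pixel is converted to road-plane coordinates and assigned the height of the nearest road-surface point. The function and variable names, the scale and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def project_to_surface(contour_px, px_per_m, cloud_xyz):
    """Lift 2D top-view detections onto the 3D road-surface point cloud.

    `contour_px` is an (N, 2) array of top-view pixel coordinates of a detected
    object, `px_per_m` the assumed top-view scale, and `cloud_xyz` the (M, 3)
    conflated road-surface cloud whose x/y axes coincide with the top view.
    """
    xy_m = contour_px.astype(float) / px_per_m       # pixels -> metres
    tree = cKDTree(cloud_xyz[:, :2])                  # index the surface by (x, y)
    _, idx = tree.query(xy_m)                         # nearest surface point per pixel
    z = cloud_xyz[idx, 2]                             # adopt its height
    return np.column_stack([xy_m, z])                 # (N, 3) points on the surface

# Hypothetical inputs: a detected arrow contour and a flat synthetic surface cloud.
contour_px = np.array([[210, 340], [230, 340], [230, 420], [210, 420]])
cloud_xyz = np.column_stack([np.random.rand(10000, 2) * 50.0, np.zeros(10000)])
object_3d = project_to_surface(contour_px, px_per_m=10.0, cloud_xyz=cloud_xyz)
```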
  • In step S6 of the method, the object 60 projected onto the surface of the road 40 in the 3D model of the scanned road is modelled. For this purpose, a mathematical curve-fitting algorithm may be used. In particular, a Non-Uniform Rational B-Spline (NURBS) technique may be used to perform the curve fitting, as illustrated after this paragraph. The NURBS methodology can represent any form of curve, so that a detected object/road paint can be represented precisely.
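  • NURBS curve fitting is named here without an implementation; the sketch below only illustrates how a NURBS curve is evaluated from a handful of control points, weights and a knot vector, which is what makes the compact (non-polyline) representation of a detected contour possible. The example degree, control points, weights and knot vector are arbitrary assumptions.

```python
import numpy as np

def nurbs_point(u, degree, knots, ctrl_pts, weights):
    """Evaluate one point of a NURBS curve at parameter u (Cox-de Boor recursion)."""
    def basis(i, p, u):
        if p == 0:
            return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
        left = 0.0
        if knots[i + p] != knots[i]:
            left = (u - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, u)
        right = 0.0
        if knots[i + p + 1] != knots[i + 1]:
            right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * basis(i + 1, p - 1, u)
        return left + right

    N = np.array([basis(i, degree, u) for i in range(len(ctrl_pts))])
    w = N * weights                                   # rational (weighted) basis
    return (w @ ctrl_pts) / w.sum()

# A cubic NURBS approximating, say, the curved outline of a directional arrow:
# only a few control points and weights are stored instead of a long polyline.
ctrl = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.1], [3.0, 2.5, 0.1],
                 [5.0, 1.0, 0.0], [6.0, 0.0, 0.0]])
wts = np.array([1.0, 1.0, 2.0, 1.0, 1.0])
knots = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], dtype=float)   # clamped knot vector
curve = np.array([nurbs_point(u, 3, knots, ctrl, wts)
                  for u in np.linspace(0.0, 0.999, 50)])        # sample just below u = 1
```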
  • It will be appreciated by those skilled in the art having the benefit of this disclosure that implementations of the invention are believed to provide a method for detecting and modelling of an object on a surface of a road. Further modifications and alternative embodiments of various aspects of the invention will be apparent to those skilled in the art in view of this description. Accordingly, this description is to be construed as illustrative only and is provided for the purpose of teaching those skilled in the art the general manner of carrying out the invention. It is to be understood that the forms of the invention shown and described herein are to be taken as the presently preferred embodiments. Elements and materials may be substituted for those illustrated and described herein, parts and processes may be reversed, and certain features of the invention may be utilized independently, all as would be apparent to one skilled in the art after having the benefit of this description of the invention. Changes may be made in the elements described herein without departing from the spirit and scope of the invention as described in the following claims.

Claims (20)

1. A vehicle comprising:
a camera unit configured to capture one or more images of a road; and
a processor configured to:
generate a 3D model of the road based at least on the one or more images, the 3D model representing at least a surface of the road;
generate a top-view image of the road based on the 3D model;
detect an object in the top-view image of the road;
project the detected object onto the surface of the road in the 3D model; and
generate a model of the object projected onto the surface of the road in the 3D model.
2. The vehicle of claim 1, wherein the processor is configured to determine a future direction of travel of the vehicle based on the generated model.
3. The vehicle of claim 1, wherein the processor is configured to determine a position of the vehicle based on the generated model.
4. The vehicle of claim 1, wherein the vehicle is a self-driving vehicle.
5. The vehicle of claim 1, wherein the processor is configured to transmit the 3D model of the road to a remote server.
6. The vehicle of claim 5, wherein the processor is configured to transmit the top-view image of the road to a remote server.
7. The vehicle of claim 6, wherein the processor is configured to receive, from the remote server, a second model of the object projected onto the surface of the road in a second 3D model.
8. The vehicle of claim 7, wherein the processor is configured to determine a future direction of travel of the vehicle based on the second model.
9. The vehicle of claim 7, wherein the processor is configured to determine a position of the vehicle based on the second model.
10. A method for generating a model of an object on a road, comprising:
generating a 3D model of the road based at least on one or more images, the 3D model representing at least a surface of the road;
generating a top-view image of the road based on the 3D model;
detecting an object in the top-view image of the road;
projecting the detected object onto the surface of the road in the 3D model; and
generating a model of the object projected onto the surface of the road in the 3D model.
11. The method of claim 10, comprising determining a future direction of travel of a vehicle based on the generated model.
12. The method of claim 10, comprising determining a position of a vehicle based on the generated model.
13. The method of claim 10, comprising transmitting the 3D model of the road to a remote server.
14. The method of claim 13, comprising transmitting the top-view image of the road to a remote server.
15. The method of claim 13, comprising receiving, from the remote server, a second model of the object projected onto the surface of the road in a second 3D model.
16. The method of claim 15, comprising determining a position of a vehicle based on the second model.
17. An apparatus comprising:
a memory; and
a processor communicatively coupled to the memory, the processor configured to:
generate an aggregate top-view image of a road based on a plurality of top-view images;
detect an object in the aggregate top-view image of the road;
generate an aggregate 3D model based on the plurality of 3D models, the aggregate 3D model representing at least a surface of the road;
project the detected object onto the surface of the road in the aggregate 3D model; and
generate a model of the object projected onto the surface of the road in the aggregate 3D model.
18. The apparatus of claim 17, wherein the processor is configured to:
receive a plurality of 3D models of the road from one or more vehicles; and
generate each of the plurality of top-view images based on a corresponding one of the plurality of 3D models of the road.
19. The apparatus of claim 17, wherein the processor is configured to receive the plurality of top-view images from one or more vehicles.
20. The apparatus of claim 17, wherein the processor is configured to transmit the generated model to at least one vehicle.
US18/208,223 2018-12-13 2023-06-09 Method for detecting and modeling of object on surface of road Abandoned US20230351687A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/208,223 US20230351687A1 (en) 2018-12-13 2023-06-09 Method for detecting and modeling of object on surface of road

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
PCT/CN2018/120886 WO2020118619A1 (en) 2018-12-13 2018-12-13 Method for detecting and modeling of object on surface of road
US17/344,405 US11715261B2 (en) 2018-12-13 2021-06-10 Method for detecting and modeling of object on surface of road
US18/208,223 US20230351687A1 (en) 2018-12-13 2023-06-09 Method for detecting and modeling of object on surface of road

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US17/344,405 Continuation US11715261B2 (en) 2018-12-13 2021-06-10 Method for detecting and modeling of object on surface of road

Publications (1)

Publication Number Publication Date
US20230351687A1 true US20230351687A1 (en) 2023-11-02

Family

ID=71076195

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/344,405 Active 2039-05-15 US11715261B2 (en) 2018-12-13 2021-06-10 Method for detecting and modeling of object on surface of road
US18/208,223 Abandoned US20230351687A1 (en) 2018-12-13 2023-06-09 Method for detecting and modeling of object on surface of road

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US17/344,405 Active 2039-05-15 US11715261B2 (en) 2018-12-13 2021-06-10 Method for detecting and modeling of object on surface of road

Country Status (7)

Country Link
US (2) US11715261B2 (en)
EP (1) EP3895135A4 (en)
JP (1) JP2022513830A (en)
KR (1) KR20210102953A (en)
CN (1) CN113196341A (en)
CA (1) CA3122865A1 (en)
WO (1) WO2020118619A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20220157196A (en) 2021-05-20 2022-11-29 삼성전자주식회사 Method for processing image and electronic device thereof
US12406395B2 (en) * 2021-12-07 2025-09-02 Adasky, Ltd. Vehicle to infrastructure extrinsic calibration system and method
US20250200809A1 (en) * 2022-03-21 2025-06-19 Lg Electronics Inc. Point cloud data transmission device, point cloud data transmission method, point cloud data reception device, and point cloud data reception method
KR102701224B1 (en) * 2023-04-25 2024-08-30 국방과학연구소 Data processing apparatus and method for predicting traversable regions and traversal cost from 3d lidar point clouds and images in various unstructured environments

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008099915A1 (en) * 2007-02-16 2008-08-21 Mitsubishi Electric Corporation Road/feature measuring device, feature identifying device, road/feature measuring method, road/feature measuring program, measuring device, measuring method, measuring program, measured position data, measuring terminal, measuring server device, drawing device, drawing method, drawing program, and drawing data
DE102014208664A1 (en) * 2014-05-08 2015-11-12 Conti Temic Microelectronic Gmbh METHOD AND DEVICE FOR DISABLING DISPLAYING A VEHICLE ENVIRONMENT ENVIRONMENT
CN104766366B (en) * 2015-03-31 2019-02-19 东北林业大学 A method for establishing a 3D virtual reality presentation
CN105069395B (en) * 2015-05-17 2018-10-09 北京工业大学 Roadmarking automatic identifying method based on Three Dimensional Ground laser scanner technique
EP3131020B1 (en) * 2015-08-11 2017-12-13 Continental Automotive GmbH System and method of a two-step object data processing by a vehicle and a server database for generating, updating and delivering a precision road property database
CN105719284B (en) * 2016-01-18 2018-11-06 腾讯科技(深圳)有限公司 A kind of data processing method, device and terminal
CN105678285B (en) * 2016-02-18 2018-10-19 北京大学深圳研究生院 A kind of adaptive road birds-eye view transform method and road track detection method
AU2017300097B2 (en) * 2016-07-21 2022-03-10 Mobileye Vision Technologies Ltd. Crowdsourcing and distributing a sparse map, and lane measurements for autonomous vehicle navigation
US20180067494A1 (en) * 2016-09-02 2018-03-08 Delphi Technologies, Inc. Automated-vehicle 3d road-model and lane-marking definition system
CN110832348B (en) 2016-12-30 2023-08-15 辉达公司 Point cloud data enrichment for high-definition maps of autonomous vehicles
WO2019000417A1 (en) * 2017-06-30 2019-01-03 SZ DJI Technology Co., Ltd. Map generation systems and methods
US10699135B2 (en) * 2017-11-20 2020-06-30 Here Global B.V. Automatic localization geometry generator for stripe-shaped objects

Also Published As

Publication number Publication date
CN113196341A (en) 2021-07-30
US20210304492A1 (en) 2021-09-30
US11715261B2 (en) 2023-08-01
WO2020118619A1 (en) 2020-06-18
EP3895135A4 (en) 2022-08-24
KR20210102953A (en) 2021-08-20
JP2022513830A (en) 2022-02-09
EP3895135A1 (en) 2021-10-20
CA3122865A1 (en) 2020-06-18


Legal Events

Date Code Title Description
AS Assignment

Owner name: QUALCOMM TECHNOLOGIES, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CONTINENTAL AUTOMOTIVE GMBH;REEL/FRAME:063937/0398

Effective date: 20210601

Owner name: CONTINENTAL HOLDING CHINA CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:XUE, HAITAO;QUAN, DONGBING;YANG, CHANGHONG;SIGNING DATES FROM 20210625 TO 20210628;REEL/FRAME:063936/0995

Owner name: CONTINENTAL AUTOMOTIVE GMBH, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HERBST, JAMES;REEL/FRAME:063989/0451

Effective date: 20210705

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

AS Assignment

Owner name: CONTINENTAL AUTOMOTIVE TECHNOLOGIES GMBH, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CONTINENTAL AUTOMOTIVE GMBH;CONTINENTAL HOLDING CHINA CO., LTD.;REEL/FRAME:071099/0011

Effective date: 20250318

Owner name: CONTINENTAL AUTOMOTIVE TECHNOLOGIES GMBH, GERMANY

Free format text: ASSIGNMENT OF ASSIGNOR'S INTEREST;ASSIGNORS:CONTINENTAL AUTOMOTIVE GMBH;CONTINENTAL HOLDING CHINA CO., LTD.;REEL/FRAME:071099/0011

Effective date: 20250318

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION COUNTED, NOT YET MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION