WO2013186160A1 - Closed loop 3d video scanner for generation of textured 3d point cloud - Google Patents
- Publication number
- WO2013186160A1 (Application PCT/EP2013/061895)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point cloud
- textured
- data
- positional
- scanning
- Prior art date
Classifications
- G — Physics; G01 — Measuring; testing; G01S — Radio direction-finding; radio navigation; determining distance or velocity by use of radio waves; locating or presence-detecting by use of the reflection or reradiation of radio waves; analogous arrangements using other waves
- G01S17/00 — Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/87 — Combinations of systems using electromagnetic waves other than radio waves
- G01S17/875 — Combinations of systems using electromagnetic waves other than radio waves for determining attitude
- G01S17/88 — Lidar systems specially adapted for specific applications
- G01S17/89 — Lidar systems specially adapted for specific applications for mapping or imaging
Abstract
Method and apparatus for recording and presenting more than one type of visual information regarding an object or structure by generating a textured 3D point cloud with a 3D Video scanner.
Description
Closed loop 3D Video scanner for generation of textured 3D point cloud
Introduction
The present invention describes a method for recording and presenting visual information related to an object or structure. More specifically, the invention is described by a method for presenting more than one type of visual information regarding an object or structure.
Background
There are different types of solutions for displaying visual information regarding an object. These include presenting high-quality panoramic pictures, integrated photographs to enhance details, user-friendly menus and informative text. The solution may include interactive maps and drawings, navigation capabilities and integrated video. Further, the solution may also include a 3D point cloud or a 3D mesh model. The 3D mesh model may be generated from the 3D point cloud or from other information sources, such as drawings, 360° panoramic photographs, still photographs, etc.
A 3D model represents a 3D object using a collection of points in 3D space, connected by various geometric entities such as triangles, lines, curved surfaces, etc. Being a collection of data points and other information, 3D models can be created by hand or scanned.
A 3D point cloud is a set of vertices in a three-dimensional coordinate system.
These vertices are usually defined by X, Y and Z coordinates, and the cloud is typically intended to represent the external surface of an object. Point clouds are most often created by 3D scanners. These devices automatically measure a large number of points on the surface of an object and often output the point cloud as a data file; the point cloud represents the set of points that the device has measured. As the result of a 3D scanning process, point clouds are used for many purposes, including creating 3D CAD models of objects, quality inspection, and a multitude of visualization, animation, rendering and mass-customization applications. While point clouds can be directly rendered and inspected, they are generally not directly usable in most 3D applications and are therefore usually converted to polygon or triangle mesh models.
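As an illustration of the data involved (not part of the patent text), a minimal textured point cloud can be represented as records of XYZ coordinates plus an RGBA texture sample; the NumPy layout and field names below are assumptions made for the sketch.

```python
# Minimal textured point cloud: position plus a colour sample per point.
import numpy as np

point_dtype = np.dtype([
    ("xyz",  np.float32, 3),   # X, Y, Z in the scanner's coordinate system
    ("rgba", np.uint8,   4),   # texture sampled by the video sensor
])

cloud = np.zeros(4, dtype=point_dtype)
cloud["xyz"]  = [[0.0, 0.0, 1.5], [0.1, 0.0, 1.5], [0.0, 0.1, 1.6], [0.1, 0.1, 1.6]]
cloud["rgba"] = [[200, 180, 160, 255]] * 4

# The cloud samples the object's external surface; e.g. its bounding box:
print(cloud["xyz"].min(axis=0), cloud["xyz"].max(axis=0))
```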
A 3D mesh model or a polygon mesh or unstructured grid is a collection of vertices, edges and faces that defines the shape of a polyhedral object in 3D solid modelling. The faces usually consist of triangles, quadrilaterals or other simple convex polygons.
The present invention is described by a method for generating 3D point cloud data from a scanning device comprising one or more infrared transmitter and receiver sensors combined with one or more RGB (Red, Green, Blue, with no transparency) or RGBA (Red, Green, Blue, Alpha; RGB with opacity) video sensors, or another similar device, for providing the texture of the 3D point cloud. From measuring multiple points a complete 3D point cloud is generated. Simultaneously, one or more RGBA video sensors record and provide a texture for the points in the point cloud. The present invention is a system where the textured point cloud is generated in real time on a computer, e.g. a portable computer or tablet. This allows all areas of interest to be scanned, even tight spaces where stationary systems cannot be positioned.
To create a functional mobile system, a technology capable of compensating for movement of the system during operation is required, so that a 3D point cloud can be created where all the points are correctly positioned or referenced in space. Further, to enhance the precision of the positional data recorded during scanning, the scanning device may be periodically linked to a physical reference station or docking station with known positional reference parameters to enable fine tuning of the positional data. Other means of enhancing the precision of the scanned 3D points are also presented as part of the invention. As the 3D Video scanner in one embodiment of the invention is a portable device linked to a computer and generating a textured 3D point cloud in real time, the data generated can be viewed during the scanning process. This link may be hard wired or use other means of communication between the scanner device and the computer, such as wireless communication. Consequently, as part of a quality control process, a full real-time view of the captured data is available without post-processing, greatly improving the process of ensuring a full data set.
Furthermore, in another embodiment the 3D Video scanner may be equipped with an additional computer screen fixed to the scanning device and linked to a portable computer, such that a preview of scanned data can be observed by the operator during the scanning process even if the portable computer is stored in a carrying device or stored in the vicinity of the 3D Video scanner with wireless or other communication means.
The 3D Video scanner may be equipped with pattern recognition software capable of comparing the frame currently observed by the device with previously recorded data. This is helpful when an area of missing data is observed. The scanning system will then recognize where a new scan needs to commence to fill in areas of missing data, tying the new data to the previously scanned data set and thus providing a complete textured 3D point cloud.
The 3D Video scanner software may further include algorithms and software processing means to develop a 3D mesh model from the 3D point cloud data. The 3D mesh model may be produced in real time, or near real time, allowing a complete 3D mesh model to be produced in parallel with the generation of the 3D point cloud.
The 3D Video scanner software is capable of guiding the operator by identifying areas with missing data where additional data is required to generate a complete 3D point cloud, or, by comparison with previous 3D point cloud data, identifying areas where changes or modifications to the structure or object have occurred, requiring a new scan to be performed. In the 3D point cloud a maximum allowable distance between points may be defined as a criterion to identify areas of insufficient data collection. In the 3D mesh model a maximum size of computed polygons or triangles may be defined as a similar criterion. In both cases, the software may identify areas of insufficient data based on the criterion and guide the operator to those areas to obtain additional information.
The portable 3D Video scanner may also be operated in combination with a fixed point laser scanner where the portable 3D Video scanner may provide point cloud information to fill in the missing data not visible from the fixed point laser scanner.
There are several advantages of the present invention over prior art. It is a time-efficient and cost-effective method capable of providing a textured 3D point cloud in areas where traditional stationary laser scanners do not reach. This is both a result of the 3D Video scanner being very fast compared to traditional technology and of it being capable of scanning areas that traditional scanners cannot reach from their fixed position. The efficiency of the 3D Video scanner is a result of scanning an area as opposed to a single point at a time, and also a result of simultaneously capturing texture data. Traditional laser scanners do not provide texture, only a black-and-white image, as the scanner works on a single frequency. To provide texture, a 360° photograph can be captured from the same position and later combined with the scan during post-processing, which is a time-consuming process. Also, the precision of the positional data recorded during scanning will be enhanced by the closed loop recalculation process, which in turn will increase the precision of the 3D point cloud data. The technology offered with the present invention will increase efficiency both during generation of the 3D point cloud and from being able to see and visualise the installation in an improved way. Communication and collaboration between an installation and a support organization will be greatly improved with the aid of the textured 3D point cloud view of an installation.
The quality of planning for remote activities, and the planning and engineering of modifications to an installation at a site, will improve with better visualisation, thus reducing or eliminating the need for travel to an installation. It will provide improved Health, Safety and Environment operations by having engineers and support personnel visualise areas under discussion instead of being physically located at the site.
Summary of the invention
The present invention is defined by a method for recording and presenting visual and positional information related to an object or structure by using a 3D scanning device. Said method comprises the following steps:
- recording both 3D point cloud data and texture data simultaneously by means of said scanning device directed at said structure or object;
- receiving said 3D point cloud and texture data on a computer system and producing a textured point cloud of said structure or object;
- measuring movement of said scanning device during scanning operation, thus enabling compensation for possible movement;
- calculating possible positional drift during scanning operation by comparing a first pass and a second pass of the same object and, if positional drift is found, compensating for this by generating compensation data, and
- providing and presenting a textured 3D point cloud of said structure or object based on said textured point cloud with possible compensation for movement of said scanning device and positional drift during scanning operation.
Further features of the inventive method are defined in the dependent claims.
The invention is also defined by an apparatus for recording and presenting visual and positional information of a textured point cloud related to an object or structure, said apparatus is a portable device with one or more laser scanning devices, one or more video sensor devices, and a motion compensation device.
Further features are defined in the dependent claims.
Detailed description
The invention is defined by a method for generating a textured 3D point cloud in real time.
The first step of the method is capturing both 3D point cloud data and texture data simultaneously by means of said scanning device directed at said structure or object. This is achieved by scanning a structure or object by moving a 3D Video scanner in or around the structure or object of interest to provide a dataset of textured point cloud data in 3D.
The second step is receiving said 3D point cloud and texture data on a computer system and producing a textured point cloud of said structure or object.
The third step is measuring movement of said scanning device during scanning operation, thus enabling compensation for possible movement. The fourth step is calculating possible positional drift during scanning operation by comparing a first pass and a second pass of the same object and, if positional drift is found, compensating for this by generating compensation data.
The last step in the inventive method is providing and presenting a textured 3D point cloud of said structure or object based on said textured point cloud, with possible compensation for movement of said scanning device and positional drift during scanning operation. This last step ensures that accurate data is used when generating the textured 3D point cloud in real time.
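For concreteness, these steps can be sketched as a simple processing loop. Everything below is a hypothetical illustration: the frame format, the translation-only motion model and the function name are assumptions, and drift correction (the fourth step) is sketched separately further below.

```python
# Sketch of steps 1-3 and 5: capture frames, compensate each frame with the
# motion measured by the scanner, and merge everything into one cloud.
import numpy as np

def run_scan(frames, offsets):
    """frames: list of (points Nx3, colours Nx4); offsets: per-frame scanner
    displacement from the motion sensors (translation-only for simplicity)."""
    cloud_pts, cloud_rgb = [], []
    for (pts, rgb), offset in zip(frames, offsets):
        cloud_pts.append(pts + offset)   # map device-frame points to world frame
        cloud_rgb.append(rgb)
    return np.vstack(cloud_pts), np.vstack(cloud_rgb)  # ready for presentation

# Two toy frames; before the second, the device moved 0.1 m along x:
f1 = (np.array([[0.0, 0.0, 1.0]]), np.array([[255, 255, 255, 255]]))
f2 = (np.array([[-0.1, 0.0, 1.0]]), np.array([[128, 128, 128, 255]]))
pts, rgb = run_scan([f1, f2], [np.zeros(3), np.array([0.1, 0.0, 0.0])])
print(pts)   # both readings land on the same world point (0, 0, 1)
```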
The present invention will now be described in detail with reference to the figures, where:
Figure 1 shows the 3D video scanner principle;
Figure 2 shows the 3D scanner with a single portable sensor unit, and
Figure 3 shows a 3D video scanner diagram with several sensor packages.
Figure 1 shows the 3D video scanner principle. The present invention is described by a method for generating 3D point cloud data from a scanning device (3D Video scanner Sensor Package) comprising one or more infrared transmitters and receivers (IFR) combined with an RGBA video sensor or other similar device for providing the texture of the 3D point cloud. The infrared transmitters and receivers emit and receive a single-frequency signal to determine the x, y and z components of points in space by measuring the distance to the object as well as the angles relative to the horizontal and vertical planes. From measuring multiple points a complete point cloud is generated. Simultaneously, one or more RGBA video sensors record and provide a texture for the points in the point cloud. One or more sensor packages, each comprising one or more infrared transmitters, one or more infrared receivers and one or more RGBA video sensors, are connected to a power source and a portable computer to allow capture and generation of the textured point cloud in real time.
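The x, y and z components follow from the measured distance and the two angles by a standard spherical-to-Cartesian conversion, sketched below; the axis convention (z up, angles in radians) is an assumption for the illustration.

```python
import math

def point_from_range(d, azimuth, elevation):
    """Distance d plus the horizontal angle (azimuth) and vertical angle
    (elevation), both in radians, converted to Cartesian coordinates (z up)."""
    x = d * math.cos(elevation) * math.cos(azimuth)
    y = d * math.cos(elevation) * math.sin(azimuth)
    z = d * math.sin(elevation)
    return x, y, z

# A return 2 m away, 30° to the side, 10° above the horizontal plane:
print(point_from_range(2.0, math.radians(30), math.radians(10)))
```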
The 3D Video scanner may further include one or more LED lights for improving the light conditions for the scanning and video sensors. Typically, the LED light will cover a field of view comparable to or larger than the field of view of the scanning device.
In contrast to existing systems for generating a 3D point cloud, where the laser scanner with the infrared transmitter and receiver is placed stationary on a tripod, the present invention is a mobile system. A stationary system operated from a tripod is only capable of defining the points in space that are visible to the sensor from the location of the stationary device. Consequently there will be empty space in shadows behind structures where no points are defined. The amount of empty areas where no data exist can be reduced by obtaining 3D point cloud measurements from multiple stations in the same area and linking the data from these stations. In the present invention the textured point cloud is generated in real time on a portable computer. The portable computer may be wired to the scanning system or placed nearby using wireless or other communication means. Consequently the complete system is a portable system which can be carried around in the area to be scanned.
Figure 2 shows a single sensor unit according to one embodiment of the present invention. It shows a complete portable device performing the inventive method.
This portable device allows all areas to be scanned, even areas of tight space where a stationary system is unable to be positioned.
One embodiment of the present invention is a mobile system with motion sensing technology. As the scanning device moves during operation, it is necessary to use motion detection sensors or other positional reference means to constantly define the point in space and the direction of each of the sensor packages. It is sufficient to have a single positional reference within the scanning device, as the positional and directional offset to each of the sensor packages remains constant during scanning and can be compensated for to define the positional information for each sensor package.
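Because the offset of each sensor package relative to the single positional reference is a fixed rigid transform, the pose of every package can be derived from one reference pose. A minimal sketch, assuming a rotation-matrix/translation-vector representation:

```python
import numpy as np

def package_pose(ref_R, ref_t, offset_R, offset_t):
    """Compose the device pose (ref_R, ref_t) with a package's fixed
    rigid offset (offset_R, offset_t) to get the package's world pose."""
    R = ref_R @ offset_R            # orientation of the sensor package
    t = ref_R @ offset_t + ref_t    # position of the sensor package
    return R, t

# Device rotated 90° about z, package mounted 0.2 m along the device's x axis:
ref_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
R, t = package_pose(ref_R, np.zeros(3), np.eye(3), np.array([0.2, 0.0, 0.0]))
print(t)   # -> [0.  0.2 0. ]: the mount now extends along world y
```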
Figure 3 illustrates that the 3D video scanner can be connected to several different sensor packages or devices.
The present invention may comprise a tri-axial accelerometer device connected to sensor electronics to provide data input for tracking the positional data of the scanner during the scanning operation, with computation of the relevant position data done in real time. Alternatively, gyroscopic or other types of positional reference systems may be used.
In a preferred embodiment the accuracy and precision of all scanned points are improved by rescanning a previously scanned area to define the drift and offset of the positional data observed between a first and a second pass. Pattern recognition software is used to compare the first and second pass and define the positional drift between said first and said second pass. The positional data of all points scanned between the first and second pass are then recalculated by assuming a constant positional drift between the passes. It is understood that data from more than a first and second pass may be acquired to further enhance the precision of the positional data.
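A hedged sketch of this recalculation under the stated constant-drift assumption (and, for simplicity, translation-only drift): the offset observed at the second pass is distributed linearly over everything recorded between the two passes.

```python
import numpy as np

def correct_drift(points, timestamps, t_first, t_second, observed_drift):
    """points: Nx3; timestamps: N capture times; observed_drift: 3-vector
    measured when the first-pass area is rescanned at t_second."""
    frac = np.clip((timestamps - t_first) / (t_second - t_first), 0.0, 1.0)
    return points - frac[:, None] * observed_drift   # remove accumulated drift

pts = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
ts  = np.array([5.0, 10.0])                          # halfway and at the end
print(correct_drift(pts, ts, 0.0, 10.0, np.array([0.1, 0.0, 0.0])))
```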
Alternatively, the drift of the positional data may be compensated for either by constantly linking the scanned data to a reference source, for instance an object in the view of the scanner with known positional properties, or by linking the scanned data to a reference source such as a positional docking station once during the scanning process, using motion detection information to compensate for movement of the scanning device during operation, and then subsequently placing the scanning device in the same positional docking station for drift calculation. Said docking station may be a physical device in which said scanning device is placed to provide said closed loop positional reference, or a defined object with known positional reference data observed and identified by the scanner during a first and second pass, as described above.
Another aspect of the present invention is that it does not scan single points one at a time; it scans a larger area, defined as a frame, simultaneously. This can be thought of as analogous to a traditional video camera, which has a certain field of view. As scanning progresses, a textured 3D point cloud is continuously built on the portable computer in real time. This allows the progression of the 3D point cloud generation to be viewed, providing quality control of the data. In the present invention pattern recognition software may be included. The scanning principle builds on the assumption that each frame will be close to the previous frame and consequently differ only marginally from the previously recorded frame, both in shape and position. A high frame rate (recorded frames per second) improves the precision of the scanning process as well as allowing faster scanning. Furthermore, if the scanning process is stopped for whatever reason, it can be continued from approximately the same place, and the pattern recognition capability provides a means of fine tuning where the scanning continues, thereby obtaining a seamless scan with no glitches.
The 3D Video scanner software may be capable of guiding the operator by identifying areas of insufficient or missing data and displaying arrows or indicators on the portable screen to point the operator in the direction where additional data is required to generate a complete 3D point cloud with no missing data. The fundamental identification of missing or insufficient data may be done by defining a maximum space between the points in the point cloud, or by producing a 3D mesh model in real time and defining a maximum size for the triangles, polygons or other shapes in the 3D mesh model. The latter is preferred, as a large space between points in the 3D point cloud could mean there is no data and only air between the points, which in that case should be correctly represented as no points. However, in a 3D mesh model covering the same area, a surface may be incorrectly defined between the widely spaced points, indicating a surface where there is only air. Typically this will be a large surface, as the distance between the points is large, and it can be identified within the system as potentially insufficient or missing data. A search function could be developed whereby areas of no or missing data are indicated on the computer screen by means of arrows or other indicators, leading the operator to areas where additional scanning is required to obtain a complete 3D point cloud.
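The mesh-based criterion can be illustrated by flagging triangles whose area exceeds a threshold; the array layout and threshold below are assumptions for the sketch.

```python
import numpy as np

def oversized_triangles(vertices, faces, max_area):
    """vertices: Vx3; faces: Fx3 vertex indices; returns indices of faces
    whose area exceeds max_area (candidates for missing data)."""
    a, b, c = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    # Triangle area = half the norm of the cross product of two edge vectors.
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    return np.nonzero(areas > max_area)[0]

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [5, 0, 0], [0, 5, 0]], float)
faces = np.array([[0, 1, 2], [0, 3, 4]])
print(oversized_triangles(verts, faces, max_area=1.0))   # -> [1]
```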
Another development is the ability to load the system with 3D point cloud data from a previously generated data set and compare the previous data set with the new data set obtained with the 3D Video scanner. By comparing the old and new 3D point clouds, areas where the structure or object has been modified can be identified and highlighted in the model. This can be done after the new scan is completed, or in real time to assist the operator in identifying critical areas where the quality of the newly scanned data is of vital importance. The comparison can be presented by the new point cloud data transparently overlaying the old point cloud data in the same viewer, or in two viewers side by side. It is understood that more than two sets of point cloud data can be compared in one or more viewers.
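One plausible realization of this comparison (an assumption, not the patent's prescribed method) is a nearest-neighbour check: new points lying farther than a tolerance from every old point are highlighted as likely modifications. SciPy's k-d tree is assumed to be available.

```python
import numpy as np
from scipy.spatial import cKDTree

def changed_points(old_cloud, new_cloud, tolerance):
    """Return indices of new points farther than `tolerance` from the old scan."""
    dist, _ = cKDTree(old_cloud).query(new_cloud)   # nearest-neighbour distances
    return np.nonzero(dist > tolerance)[0]

old = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
new = np.array([[0.01, 0.0, 0.0], [1.0, 0.0, 0.5]])    # second point has moved
print(changed_points(old, new, tolerance=0.1))          # -> [1]
```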
The textured 3D point cloud may be presented in a viewer capable of displaying said textured 3D point cloud, performing dimensional measurements within it, and simultaneously displaying 360° panoramic photographs within said viewer or within a second viewer presented in parallel within the same system, or in combination with other visual data such as still photographs, documents, video, URL links, 3D animations and 2D maps. The 2D maps may also include a radar, arrow or other means for indicating the current direction and/or coordinates related to the object or installation viewed.
When presenting coordinates, these may be linked to a reference system for the object to be presented. Coordinates could be included in map and/or picture functions. This would allow the location of a 360° picture, a camera or any other relevant item or information source to be displayed in a viewer or within the textured point cloud. The coordinates could be linked to a reference system for an installation being presented in the viewer and consequently all information presented with coordinates in the viewer will be referenced to the coordinate system for the installation.
Coordinates could be displayed when moving the mouse or any other marker around the map or a 360° panoramic picture. When moving the cursor or mouse around the map, the coordinates of the pointing device will be displayed in an information window or directly on the map. When moving the pointing device within a 360° picture, a function could be developed whereby the coordinates of a point within the 360° picture are displayed in an information window or directly within the 360° picture, on the map, or at another location.
As said, camera/video systems can be integrated into a system with a separate viewer for presenting pictures and/or video. The first viewer can present a textured 3D point cloud or 3D mesh model, a second viewer can present a 360° picture and a third viewer can show a video film. There could be one or more of each viewer type, for instance one 360° picture viewer and one or more video viewers. A first viewer can present a 360° picture, and within the area covered by this 360° picture there can be one or more cameras or video cameras. The photographs or video recorded by the cameras can then be presented in the viewers. Distances to and coordinates of an object viewed in the video camera can also be determined by laser scanning, provided the video camera is equipped with relevant sensors. A 360° panoramic picture can be integrated into a 3D point cloud or mesh model created by laser scanning. One way to do this is by defragmenting a 360° panoramic picture into numerous smaller pictures, each representing a defined part of the original panoramic picture. Each part of the defragmented 360° picture can then be integrated at its appropriate place into a 3D model created by laser scanning, thus recreating the 360° picture in a 3D point cloud or mesh model.
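The defragmentation step can be illustrated by mapping each pixel (or tile centre) of the panorama to a viewing direction, which determines where that fragment belongs in the 3D model; the equirectangular projection and axis convention are assumptions for the sketch.

```python
import math

def pixel_to_direction(u, v, width, height):
    """(u, v) pixel in a width x height equirectangular 360° image
    mapped to a unit viewing direction from the camera position."""
    lon = (u / width) * 2.0 * math.pi - math.pi      # -pi .. pi around the camera
    lat = math.pi / 2.0 - (v / height) * math.pi     # +pi/2 (up) .. -pi/2 (down)
    return (math.cos(lat) * math.sin(lon),
            math.cos(lat) * math.cos(lon),
            math.sin(lat))

# The image centre looks straight ahead along +y:
print(pixel_to_direction(2048, 1024, 4096, 2048))    # ~ (0.0, 1.0, 0.0)
```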
A video camera used could have GPS or another measurement system suitable for indoor use, such as an inertial navigation system, for recording the positional coordinates of the camera, and/or a directional device giving the direction the camera is pointing. Alternatively, a measurement system could record positional data relative to a known reference point in the area. The coordinates and/or the direction of the camera position could be transmitted in real time to the system performing the inventive method. The system could display both video and the coordinates/direction of a camera position. For instance, if a mobile camera system is used, the system could receive and display both live video and current positional coordinates from the camera. Live video could be displayed in a viewer together with coordinates. A map in a viewer could be updated to display the camera position and/or direction at the same time.
As understood from the description above, the present invention can be used as a communication and visualisation tool for various work processes, as well as a tool for engineering and planning of changes and modifications to the structure or installation.
Allowing a remotely located user onshore to view a textured 3D point cloud, textured 3D mesh model, photographs, video or other information about the installation will limit the need to travel to the installation. In other cases it will make a trip shorter and more efficient by enabling better planning and preparation prior to a site visit.
Claims
1. Method for recording and presenting visual and positional information related to an object or structure by using a 3D scanning device,
characterized in the following steps: a) capturing both 3D point cloud data and texture data simultaneously by means of said scanning device directed at said structure or object;
b) receiving said 3D point cloud and texture data on a computer system and producing a textured point cloud of said structure or object; c) measuring movement of said scanning device during scanning operation, thus enabling compensation for possible movement; d) calculating possible positional drift during scanning operation by comparing a first pass and a second pass of the same object and, if positional drift is found, compensating for this by generating compensation data, and e) providing and presenting a textured 3D point cloud of said structure or object based on said textured point cloud with possible compensation for movement of said scanning device and positional drift during scanning operation.
2. Method according to claim 1, characterized in that the 3D point cloud data of the structure or object is determined by using a portable laser scanning device.
3. Method according to claim 1, characterized in that the texture of the scanned structure or object is determined by a video sensor.
4. Method according to claim 1, characterized in that a textured point cloud is generated by operating a portable device to access all areas of interest without the limitation of the recording device being stationary during operation.
5. Method according to claim 1, characterized in that a textured point cloud is generated with a combination of a fixed point laser scanner and a portable 3D Video scanner to provide point cloud information of the areas not visible from the fixed point laser scanner.
6. Method according to claim 1, characterized in generating a textured 3D mesh model from said textured 3D point cloud data.
7. Method according to claim 1, characterized in that positional drift is compensated for by recalculating the positions of all points between the first pass and a second pass of the same object to provide a closed loop.
8. Method according to claim 1, characterized in that positional drift is compensated for by letting the scanning device be periodically linked to a physical reference station or docking station with known positional reference parameters to enable generation of compensation data.
9. Method according to claims 2 and 3, characterized in that a textured point cloud is generated by simultaneous recording of distance and positional data from the laser scanner and texture from the video sensor.
10. Method according to claim 4, characterized in that a textured point cloud is generated in real-time.
11. Method according to claim 4, characterized in that a textured point cloud is generated in real-time by a portable device comprising one or more laser scanning devices and one or more video sensor devices.
12. Method according to claim 4, characterized in that a textured point cloud is generated by a portable device comprising one or more laser scanning devices, one or more video sensor devices, and a motion compensation device.
13. Method according to claim 4, characterized in that a textured point cloud is generated in real-time by a portable device comprising one or more laser scanning and video sensor devices and a motion compensation device that are connected to a portable computer.
14. Method according to claim 4, characterized in that a textured point cloud is generated in real-time by a portable device comprising one or more laser scanning and video sensor devices and a motion compensation device, connected to a portable computer, and using pattern recognition software to control the generation of the textured point cloud.
15. Method according to claim 14, characterized in that pattern recognition software is used to identify positional drift between a first pass and a second pass of the same area, and the positional reference of the point cloud data recorded between these passes is recalculated to compensate for said positional drift.
16. Method according to claim 14, characterized in that pattern recognition software is used to identify areas of no or missing data in the textured point cloud.
17. Method according to claim 16, characterized in that areas of no or missing data in the textured point cloud are identified in real-time.
18. Method according to claim 16, characterized in that areas of no or missing data in the textured point cloud are identified in real-time and presented to the operator with a search function.
19. Method according to claims 16 and 18, characterized in that areas of no or missing data in the textured point cloud are identified by analyzing the 3D mesh model to identify areas where the maximum size of the triangles, polygons or other shapes in the 3D mesh model exceeds predefined criteria to indicate missing data.
20. An apparatus for recording and presenting visual and positional information of a textured point cloud related to an object or structure,
characterized in comprising:
- a portable device with one or more laser scanning devices,
- one or more video sensor devices, and
- a motion compensation device.
21. An apparatus according to claim 20, characterized in further comprising:
- one or more infrared transmitters;
- one or more infrared receivers;
- one or more RGB or RGBA video receivers;
- a tri-axial accelerometer device, and
- a power supply, a PC, electronics and software.
22. An apparatus according to claim 20, where said motion compensation device comprises a GPS device and/or a gyroscopic sensor device.
23. An apparatus according to claim 20, where said motion compensation device comprises an inertial navigation sensor device.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IE2012/0272 | 2012-06-11 | ||
IE20120272A IE86364B1 (en) | 2012-06-11 | 2012-06-11 | Closed loop 3D video scanner for generation of textured 3D point cloud |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013186160A1 (en) | 2013-12-19 |
Family
ID=48613603
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2013/061895 WO2013186160A1 (en) | 2012-06-11 | 2013-06-10 | Closed loop 3d video scanner for generation of textured 3d point cloud |
Country Status (2)
Country | Link |
---|---|
IE (1) | IE86364B1 (en) |
WO (1) | WO2013186160A1 (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016039868A1 (en) * | 2014-09-10 | 2016-03-17 | Faro Technologies, Inc. | A device and method for optically scanning and measuring an environment and a method of control |
US20160274589A1 (en) * | 2012-09-26 | 2016-09-22 | Google Inc. | Wide-View LIDAR With Areas of Special Attention |
US9602811B2 (en) | 2014-09-10 | 2017-03-21 | Faro Technologies, Inc. | Method for optically measuring three-dimensional coordinates and controlling a three-dimensional measuring device |
US9618620B2 (en) | 2012-10-05 | 2017-04-11 | Faro Technologies, Inc. | Using depth-camera images to speed registration of three-dimensional scans |
GB2544934A (en) * | 2014-09-10 | 2017-05-31 | Faro Tech Inc | A device and method for optically scanning and measuring an environment and a method of control |
US9671221B2 (en) | 2014-09-10 | 2017-06-06 | Faro Technologies, Inc. | Portable device for optically measuring three-dimensional coordinates |
US9684078B2 (en) | 2010-05-10 | 2017-06-20 | Faro Technologies, Inc. | Method for optically scanning and measuring an environment |
US9693040B2 (en) | 2014-09-10 | 2017-06-27 | Faro Technologies, Inc. | Method for optically measuring three-dimensional coordinates and calibration of a three-dimensional measuring device |
US10070116B2 (en) | 2014-09-10 | 2018-09-04 | Faro Technologies, Inc. | Device and method for optically scanning and measuring an environment |
US10067231B2 (en) | 2012-10-05 | 2018-09-04 | Faro Technologies, Inc. | Registration calculation of three-dimensional scanner data performed between scans based on measurements by two-dimensional scanner |
US10175037B2 (en) | 2015-12-27 | 2019-01-08 | Faro Technologies, Inc. | 3-D measuring device with battery pack |
US10220172B2 (en) | 2015-11-25 | 2019-03-05 | Resmed Limited | Methods and systems for providing interface components for respiratory therapy |
US10528840B2 (en) | 2015-06-24 | 2020-01-07 | Stryker Corporation | Method and system for surgical instrumentation setup and user preferences |
CN114127579A (en) * | 2019-07-16 | 2022-03-01 | Bodidata, Inc. | System and method for improving radar scan coverage and efficiency |
US11350077B2 (en) | 2018-07-03 | 2022-05-31 | Faro Technologies, Inc. | Handheld three dimensional scanner with an autoaperture |
WO2022153653A1 (en) * | 2021-01-13 | 2022-07-21 | Panasonic Intellectual Property Management Co., Ltd. | Distance-measuring system |
JP2022550495A (en) * | 2020-06-30 | 2022-12-02 | Shanghai SenseTime Intelligent Technology Co., Ltd. | Data processing method, device, equipment, storage medium and program |
2012
- 2012-06-11: IE application IE20120272A filed; granted as patent IE86364B1 (status unknown)
2013
- 2013-06-10: PCT application PCT/EP2013/061895 filed; published as WO2013186160A1 (active, Application Filing)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090323121A1 (en) * | 2005-09-09 | 2009-12-31 | Robert Jan Valkenburg | A 3D Scene Scanner and a Position and Orientation System |
Non-Patent Citations (1)
Title |
---|
WANG J M ET AL: "Video stabilization for a hand-held camera based on 3D motion model", IMAGE PROCESSING (ICIP), 2009 16TH IEEE INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 7 November 2009 (2009-11-07), pages 3477 - 3480, XP031628495, ISBN: 978-1-4244-5653-6 * |
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9684078B2 (en) | 2010-05-10 | 2017-06-20 | Faro Technologies, Inc. | Method for optically scanning and measuring an environment |
US9983590B2 (en) * | 2012-09-26 | 2018-05-29 | Waymo Llc | Wide-view LIDAR with areas of special attention |
US20160274589A1 (en) * | 2012-09-26 | 2016-09-22 | Google Inc. | Wide-View LIDAR With Areas of Special Attention |
US12093052B2 (en) | 2012-09-26 | 2024-09-17 | Waymo Llc | Wide-view LIDAR with areas of special attention |
US11467595B2 (en) | 2012-09-26 | 2022-10-11 | Waymo Llc | Wide-view LIDAR with areas of special attention |
US11402845B2 (en) | 2012-09-26 | 2022-08-02 | Waymo Llc | Wide-view LIDAR with areas of special attention |
US11126192B2 (en) | 2012-09-26 | 2021-09-21 | Waymo Llc | Wide-view LIDAR with areas of special attention |
US10871779B2 (en) | 2012-09-26 | 2020-12-22 | Waymo Llc | Wide-view LIDAR with areas of special attention |
US9746559B2 (en) | 2012-10-05 | 2017-08-29 | Faro Technologies, Inc. | Using two-dimensional camera images to speed registration of three-dimensional scans |
US10067231B2 (en) | 2012-10-05 | 2018-09-04 | Faro Technologies, Inc. | Registration calculation of three-dimensional scanner data performed between scans based on measurements by two-dimensional scanner |
US11112501B2 (en) | 2012-10-05 | 2021-09-07 | Faro Technologies, Inc. | Using a two-dimensional scanner to speed registration of three-dimensional scan data |
US9618620B2 (en) | 2012-10-05 | 2017-04-11 | Faro Technologies, Inc. | Using depth-camera images to speed registration of three-dimensional scans |
US11815600B2 (en) | 2012-10-05 | 2023-11-14 | Faro Technologies, Inc. | Using a two-dimensional scanner to speed registration of three-dimensional scan data |
US9739886B2 (en) | 2012-10-05 | 2017-08-22 | Faro Technologies, Inc. | Using a two-dimensional scanner to speed registration of three-dimensional scan data |
US10203413B2 (en) | 2012-10-05 | 2019-02-12 | Faro Technologies, Inc. | Using a two-dimensional scanner to speed registration of three-dimensional scan data |
US9602811B2 (en) | 2014-09-10 | 2017-03-21 | Faro Technologies, Inc. | Method for optically measuring three-dimensional coordinates and controlling a three-dimensional measuring device |
US10088296B2 (en) | 2014-09-10 | 2018-10-02 | Faro Technologies, Inc. | Method for optically measuring three-dimensional coordinates and calibration of a three-dimensional measuring device |
GB2544934A (en) * | 2014-09-10 | 2017-05-31 | Faro Tech Inc | A device and method for optically scanning and measuring an environment and a method of control |
US10070116B2 (en) | 2014-09-10 | 2018-09-04 | Faro Technologies, Inc. | Device and method for optically scanning and measuring an environment |
US9915521B2 (en) | 2014-09-10 | 2018-03-13 | Faro Technologies, Inc. | Method for optically measuring three-dimensional coordinates and controlling a three-dimensional measuring device |
US10401143B2 (en) | 2014-09-10 | 2019-09-03 | Faro Technologies, Inc. | Method for optically measuring three-dimensional coordinates and controlling a three-dimensional measuring device |
US10499040B2 (en) | 2014-09-10 | 2019-12-03 | Faro Technologies, Inc. | Device and method for optically scanning and measuring an environment and a method of control |
GB2544934B (en) * | 2014-09-10 | 2019-12-04 | Faro Tech Inc | A device and method for optically scanning and measuring an environment and a method of control |
WO2016039868A1 (en) * | 2014-09-10 | 2016-03-17 | Faro Technologies, Inc. | A device and method for optically scanning and measuring an environment and a method of control |
US9693040B2 (en) | 2014-09-10 | 2017-06-27 | Faro Technologies, Inc. | Method for optically measuring three-dimensional coordinates and calibration of a three-dimensional measuring device |
US9879975B2 (en) | 2014-09-10 | 2018-01-30 | Faro Technologies, Inc. | Method for optically measuring three-dimensional coordinates and calibration of a three-dimensional measuring device |
US9769463B2 (en) | 2014-09-10 | 2017-09-19 | Faro Technologies, Inc. | Device and method for optically scanning and measuring an environment and a method of control |
US9671221B2 (en) | 2014-09-10 | 2017-06-06 | Faro Technologies, Inc. | Portable device for optically measuring three-dimensional coordinates |
US10528840B2 (en) | 2015-06-24 | 2020-01-07 | Stryker Corporation | Method and system for surgical instrumentation setup and user preferences |
US11367304B2 (en) | 2015-06-24 | 2022-06-21 | Stryker Corporation | Method and system for surgical instrumentation setup and user preferences |
US12374070B2 (en) | 2015-06-24 | 2025-07-29 | Stryker Corporation | Method and system for surgical instrumentation setup and user preferences |
US11103664B2 (en) | 2015-11-25 | 2021-08-31 | ResMed Pty Ltd | Methods and systems for providing interface components for respiratory therapy |
US11791042B2 (en) | 2015-11-25 | 2023-10-17 | ResMed Pty Ltd | Methods and systems for providing interface components for respiratory therapy |
US10220172B2 (en) | 2015-11-25 | 2019-03-05 | Resmed Limited | Methods and systems for providing interface components for respiratory therapy |
US10175037B2 (en) | 2015-12-27 | 2019-01-08 | Faro Technologies, Inc. | 3-D measuring device with battery pack |
US11350077B2 (en) | 2018-07-03 | 2022-05-31 | Faro Technologies, Inc. | Handheld three dimensional scanner with an autoaperture |
CN114127579A (en) * | 2019-07-16 | 2022-03-01 | Bodidata, Inc. | System and method for improving radar scan coverage and efficiency |
JP2022550495A (en) * | 2020-06-30 | 2022-12-02 | Shanghai SenseTime Intelligent Technology Co., Ltd. | Data processing method, device, equipment, storage medium and program |
WO2022153653A1 (en) * | 2021-01-13 | 2022-07-21 | Panasonic Intellectual Property Management Co., Ltd. | Distance-measuring system |
Also Published As
Publication number | Publication date |
---|---|
IE86364B1 (en) | 2014-03-26 |
IE20120272A1 (en) | 2013-12-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013186160A1 (en) | Closed loop 3d video scanner for generation of textured 3d point cloud | |
US9689972B2 (en) | Scanner display | |
Golparvar-Fard et al. | Evaluation of image-based modeling and laser scanning accuracy for emerging automated performance monitoring techniques | |
US12165255B2 | Estimating dimensions of geo-referenced ground-level imagery using orthogonal imagery | |
EP2976748B1 (en) | Image-based 3d panorama | |
US10157478B2 (en) | Enabling use of three-dimensional locations of features with two-dimensional images | |
JP6057298B2 (en) | Rapid 3D modeling | |
US12307617B2 (en) | Searchable object location information | |
JP6310149B2 (en) | Image generation apparatus, image generation system, and image generation method | |
WO2022078442A1 (en) | Method for 3d information acquisition based on fusion of optical scanning and smart vision | |
CN105096382A (en) | Method and apparatus for associating actual object information in video monitoring image | |
KR20180101746A (en) | Method, electronic device and system for providing augmented reality contents | |
Kalantari et al. | Accuracy and utility of the Structure Sensor for collecting 3D indoor information | |
EP3161412B1 (en) | Indexing method and system | |
KR20220085142A (en) | Intelligent construction site management supporting system and method based extended reality | |
KR100757751B1 (en) | Apparatus and Method for Generating Environment Map of Indoor Environment | |
KR20220085150A (en) | Intelligent construction site management supporting system server and method based extended reality | |
Green et al. | Mining robotics sensors | |
US20210256177A1 (en) | System and method for creating a 2D floor plan using 3D pictures | |
KR200488998Y1 (en) | Apparatus for constructing indoor map | |
JP6575003B2 (en) | Information processing apparatus, information processing method, and program | |
KR101902131B1 (en) | System for producing simulation panoramic indoor images | |
KR20150028533A (en) | Apparatus for gathering indoor space information | |
Hairuddin et al. | Development of a 3d cadastre augmented reality and visualization in Malaysia | |
Ulvi et al. | A New Technology for Documentation Cultural Heritage with Fast, Practical, and Cost-Effective Methods IPAD Pro LiDAR and Data Fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13728370; Country of ref document: EP; Kind code of ref document: A1 |
| DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 13728370; Country of ref document: EP; Kind code of ref document: A1 |