WO2024204029A1 - Surveying method, surveying system, and program - Google Patents
Surveying method, surveying system, and program
- Publication number
- WO2024204029A1 (application PCT/JP2024/011633)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- camera
- time
- coordinate system
- calculation
- reflecting prism
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/36—Videogrammetry, i.e. electronic processing of video signals from a single source or from different sources to give parallax or range information
Definitions
- the present invention relates to surveying technology.
- Technology for measuring the position of heavy machinery using surveying equipment is known (see, for example, Patent Document 1).
- the present invention aims to improve the accuracy of three-dimensional data based on images captured by a camera mounted on a moving object.
- the present invention is a surveying method in which image data of an image captured by a moving camera is obtained, the position of a specific part fixed relative to the camera is measured, and the position of the camera is calculated based on the measured position of the specific part. There is a discrepancy between the measurement time of the position of the specific part and the shooting time of the captured image. In the calculation of the position of the camera, an adjustment calculation is performed in which the position of the camera is treated as an unknown quantity, and the adjustment calculation is made based on the position of the specific part to which a correction amount corresponding to the time discrepancy has been added.
- a process for determining an initial value for the adjustment calculation is performed, and the process for determining the initial value includes determining a correspondence between the start and/or end of movement of the position of the camera in a local coordinate system, calculated based on images captured by the camera, and of the position of the specific part in a global coordinate system.
- a process for determining an initial value for the adjustment calculation is performed, and the process for determining the initial value includes determining a correspondence between a change in the position of the camera in a local coordinate system calculated based on an image captured by the camera, and a change in the position of the specific part in a global coordinate system.
- the change in position may be evaluated based on the angle between a first line connecting the first position and the second position and a second line connecting the second position and the third position.
- the present invention is also a surveying system that has a means for obtaining image data of an image captured by a moving camera, a means for measuring the position of a specific part fixed relative to the camera, and a means for calculating the position of the camera based on the measured position of the specific part. There is a discrepancy between the time when the position of the specific part is measured and the time when the image is captured. An adjustment calculation is made in which the position of the camera is an unknown quantity, and the adjustment calculation is made based on the position of the specific part to which a correction amount corresponding to the time discrepancy has been added.
- the present invention is also a program that is read and executed by a computer, which causes the computer to acquire image data of an image captured by a moving camera, acquire measurement data of the position of a specific part fixed relative to the camera, and calculate the position of the camera based on the measured position of the specific part. There is a discrepancy between the measurement time of the position of the specific part and the shooting time of the captured image. An adjustment calculation is made in which the position of the camera is an unknown quantity, and the adjustment calculation is made based on the position of the specific part to which a correction amount corresponding to the time discrepancy has been added.
- the present invention can improve the accuracy of three-dimensional data based on images captured by a camera mounted on a moving object.
- FIG. 1 is a conceptual diagram of an embodiment.
- FIG. 2 is a conceptual diagram showing the relationship between the camera position and the prism position.
- FIG. 3 is a block diagram of the calculation unit.
- FIG. 4 is a diagram illustrating the principle of initial value matching.
- FIG. 5 is a flowchart illustrating an example of a processing procedure.
- FIG. 1 shows a heavy machine 100 performing civil engineering work and a total station 200 which is a surveying device for measuring the position of the heavy machine 100.
- while the heavy machine 100 is in operation, images are captured by a camera 101 mounted on it, and 3D data of the surroundings of the heavy machine 100 is acquired using the principle of SfM (Structure from Motion).
- during this operation, the total station 200 repeatedly and continuously measures the position of a reflecting prism 102 mounted on the heavy machine 100 while tracking it. Based on the measured values of the position of the reflecting prism 102, the 3D data is obtained as data in a global coordinate system.
- the principle of sfm is described in, for example, Japanese Patent Application Laid-Open No. 2013-186816.
- the global coordinate system is a coordinate system used in maps and GNSS.
- the heavy equipment 100 is a power shovel.
- the power shovel is one example, and the type of heavy equipment is not particularly limited as long as it is used for civil engineering work.
- the heavy equipment 100 includes a base unit 120 that travels on the ground using caterpillar tracks, and a rotating unit 110 that rotates horizontally on the base unit 120.
- the rotating unit 110 includes a driver's seat and an arm 151, and a bucket 152 is disposed at the tip of the arm 151. These structures are the same as those of a normal power shovel.
- a camera 101 and a reflecting prism 102 are fixedly disposed on top of the rotating unit 110.
- the position of the camera 101 and/or the position of the reflecting prism 102 on the rotating unit 110 may be known or unknown. Furthermore, even if the position is known, it may be approximate and may include an error. If the position of the camera 101 and/or the position of the reflecting prism 102 is unknown or includes an error, the position is optimized during adjustment calculations.
- the camera 101 is a digital still camera that continuously and repeatedly captures still images. While the heavy equipment 100 is in operation, the heavy equipment 100 travels and the rotating part 110 rotates, so the camera 101 faces in various directions. Therefore, the camera 101 captures images of the surroundings of the heavy equipment 100 while the heavy equipment 100 is in operation.
- a depth camera can also be used as the camera 101. It is also possible to use a camera for capturing video as the camera 101, and use frame images that make up the video as still images.
- Figure 1 shows an example in which one camera 101 is placed on the heavy equipment 100, but it is also possible to place multiple cameras facing in multiple directions. A stereo camera can also be used.
- Reflecting prism 102 is an optical reflective target used in surveying using laser light.
- a full-circle reflecting prism is used as reflecting prism 102.
- Reflecting prism 102 reflects incident light by changing its direction by 180 degrees.
- Besides reflecting prisms, other reflective targets with retroreflective properties can also be used as the optical reflective target.
- the position of the heavy equipment 100 is determined by the position of the center of gravity of the heavy equipment. It is also possible to determine the position of the heavy equipment 100 by the position of the camera 101 or the reflecting prism 102, a point on the central axis of rotation of the rotating part 110, or any other position on the heavy equipment 100.
- the heavy equipment 100 is equipped with a calculation unit 300.
- the calculation unit 300 is a computer.
- the calculation unit 300 can also be provided outside the heavy equipment 100.
- the calculation unit 300 performs processing related to creating a three-dimensional model of the subject based on the image captured by the camera 101. This processing includes processing related to calculating the position of the camera 101. The configuration and processing of the calculation unit 300 will be described in detail later.
- the total station 200 is an example of a surveying device capable of measuring a position.
- the total station 200 includes a positioning function using laser light, a camera, a clock, a storage device for surveying data, a communication interface, a user interface, a function for searching for a surveying target (the reflecting prism 102), and a function for tracking the surveying target even if it moves.
- a commonly available model can be used as the total station 200.
- the position and orientation of the total station 200 in the global coordinate system are acquired in advance and treated as known data.
- the position of the total station 200 is understood as the position of the optical origin of the optical system used for distance measurement.
- the heavy equipment 100 begins operation with the total station 200 aiming and locking the reflecting prism 102. While the heavy equipment 100 is operating, the total station 200 tracks the reflecting prism 102 and repeatedly measures its position. The measurement interval is approximately 0.05 seconds (20 Hz) to 5 seconds (0.2 Hz).
- 3D data of the subject is created based on images captured by the camera 101 mounted on the heavy equipment 100.
- the camera 101 repeatedly captures still images.
- the position and orientation of the camera 101 change while the heavy equipment 100 is in operation, and the capture interval of the camera 101 is chosen so that captured images adjacent on the time axis overlap with each other.
- the specific capture rate is set to roughly several Hz to 30 Hz.
- a three-dimensional model of the subject is created using SfM, based on images captured by the camera while the heavy equipment 100 is in operation, by the following process.
- Relative orientation: a relative three-dimensional model of arbitrary scale is created based on images captured by the camera 101, and the relative relationship between feature points identified across multiple images and the position and orientation of the camera 101 at the time each image was captured is determined. Specifically, feature points common to stereo images are extracted, and the positions and orientations of the camera at multiple viewpoints (capture positions) and the relative relationships of the feature points are identified.
- Adjustment calculation: a bundle adjustment calculation and an adjustment calculation that accounts for the positional relationship between the reflecting prism 102 and the camera 101 and for the deviation in measurement times are performed simultaneously, optimizing the position and orientation of the camera 101 and the positions of the feature points in the global coordinate system. This yields point cloud data describing the positions of the feature points in the global coordinate system.
- 3D model creation: a 3D model of the target is created based on the point cloud data optimized by the above adjustment calculation.
- Examples of the 3D model include digitized contour lines of the target, a DEM (Digital Elevation Model), and a TIN (Triangulated Irregular Network). Creation of a 3D model based on point cloud data is described in, for example, WO2011/070927, JP2012-230594A, and JP2014-35702A.
- the clock of the control system that controls the shooting of the camera 101 (the function of specifying the time for processing) is not synchronized with the clock of the control system that controls the operation related to the measurement of the position of the total station 200.
- this situation may occur when costs are kept low or a simple configuration is adopted.
- Figure 2 shows the relationship between the position of the reflecting prism 102 and the position of the camera 101 (position at the time of photographing).
- the solid circle (p_i, T_pi) is the position and time of the reflecting prism 102 that is actually measured.
- the dashed circle (p_ti, T_pti) is the position and time of the reflecting prism 102 that would be expected if the clocks were synchronized, that is, the prism position and time taking the time series (the deviation in time synchronization) into account.
- Symbols used in Equation 1 (the constraint condition equation):
- p′_ti: position of the reflecting prism 102 in the local coordinate system, taking the time series into account
- L′: offset between the position of the camera 101 and the position of the reflecting prism 102
- t′_i: position of the camera 101 in the local coordinate system
- -p′_ti is a function of the position of the reflecting prism 102 in the global coordinate system measured by the total station 200, the shooting time of the camera 101, and the positioning time of the reflecting prism 102, and includes a correction term proportional to the difference between the two times.
- this correction term determines the position p_ti of the dashed circle from the position p_i of the solid circle in Figure 2 (the actually measured position of the reflecting prism 102).
- (-p′_ti - L′) is the camera position in the local coordinate system, taking into account the time series of the synchronization deviation, calculated from the measured value of the position of the reflecting prism 102 by the total station 200.
- the local coordinate system is a coordinate system that describes the position and orientation of the camera 101 and the positional relationship between the feature points extracted from the image captured by the camera 101.
- the above constraint condition equation (Equation 1) is formulated in the local coordinate system. For each camera position, it sums the difference between the camera position (-p′_ti - L′), which takes the time series of the synchronization deviation into account and is calculated from the measured value of the position of the reflecting prism 102 by the total station 200, and the camera position t′_i, which is the final unknown quantity.
- the position -p′_ti of the reflecting prism 102 taking the time series into account is described below.
- the position of the i-th reflecting prism 102 measured by the total station 200 is slightly shifted from the position of the reflecting prism 102 at the time of capturing the corresponding image by the camera 101. This shift is caused by the asynchronous time information (clock) used.
- this deviation is smaller than the difference between the i-th and (i+1)-th positions of the reflecting prism 102 measured by the total station 200, as shown in FIG. 2. Since the distance between the i-th and (i+1)-th positions is minute, the motion between them can be treated as approximately uniform, and the deviation is therefore taken to be proportional to the difference between the time at which the reflecting prism 102 is positioned and the time at which the camera 101 captures the image.
- the reflecting prism 102 at the time of shooting is located at a position that has moved by the above-mentioned amount of deviation from the i-th position of the reflecting prism 102 measured by the total station 200 in the direction of the i+1-th position.
- This amount of deviation is defined as a correction term that is proportional to the difference between the time of positioning the reflecting prism 102 and the time of shooting by the camera 101.
- the prism position at the corresponding time of shooting is located at a position shifted from the i-th prism position measured by the total station 200 toward the i+1-th prism position by the amount of this correction term.
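As an illustration of this correction, the sketch below (Python; the function name and data layout are illustrative assumptions, not taken from the publication) interpolates the measured prism track to an arbitrary camera shooting time, moving from the i-th measured position toward the (i+1)-th in proportion to the time difference:

```python
import numpy as np

def prism_position_at_camera_time(prism_positions, prism_times, camera_time):
    """Interpolate the measured prism track to the (unsynchronized) camera shooting time.

    Assumes prism_times is sorted in ascending order and that the camera time
    falls between two prism measurements; the correction is proportional to the
    difference between the positioning time and the shooting time.
    """
    prism_positions = np.asarray(prism_positions, dtype=float)
    prism_times = np.asarray(prism_times, dtype=float)
    i = np.searchsorted(prism_times, camera_time) - 1           # i-th measurement
    i = int(np.clip(i, 0, len(prism_times) - 2))
    dt = prism_times[i + 1] - prism_times[i]
    ratio = (camera_time - prism_times[i]) / dt                  # proportional correction factor
    # Move from the i-th measured position toward the (i+1)-th by the correction amount
    return prism_positions[i] + ratio * (prism_positions[i + 1] - prism_positions[i])

# Usage: prism measured at 5 Hz, camera shot at a time that is not on the prism grid
times = [0.0, 0.2, 0.4, 0.6]
positions = [[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.2, 0.0, 0.0], [0.3, 0.0, 0.0]]
print(prism_position_at_camera_time(positions, times, 0.25))    # ~[0.125, 0.0, 0.0]
```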
- the orientation and position of the camera 101 in the global coordinate system are obtained by a bundle adjustment calculation using this constraint condition equation.
- in the bundle adjustment calculation, based on the collinearity condition that the light beam (bundle) connecting three points, namely a feature point of the subject, its corresponding point on the photographed image, and the projection center, must lie on the same straight line, an observation equation of the following Equation 2 is established for each light beam of each image, and the coordinates of the feature points (Xj, Yj, Zj) and the position and orientation parameters of the camera 101 (Xoi, Yoi, Zoi, a11i to a33i) are adjusted simultaneously by the least squares method.
- that is, each parameter, namely the feature points (Xj, Yj, Zj) and the position (Xoi, Yoi, Zoi) and orientation (rotation matrix a11i to a33i) of the camera 101, is optimized by the least squares method.
- Symbols used in Equation 2:
- c: screen distance (focal length)
- (Xj, Yj, Zj): three-dimensional coordinates of the feature point of interest
- (xij, yij): coordinates on image i (on the screen) of point j
- (Xoi, Yoi, Zoi): position of the camera 101 when photograph i was taken
- (a11i to a33i): rotation matrix indicating the orientation of the camera 101 when photograph i was taken
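For reference, the standard collinearity condition that Equation 2 refers to, written with the symbols above, has the following textbook form (the exact expression in the publication is given only as an image, so this is a reconstruction, not a quotation):

$$x_{ij} = -c\,\frac{a_{11i}(X_j - X_{oi}) + a_{12i}(Y_j - Y_{oi}) + a_{13i}(Z_j - Z_{oi})}{a_{31i}(X_j - X_{oi}) + a_{32i}(Y_j - Y_{oi}) + a_{33i}(Z_j - Z_{oi})}, \qquad
y_{ij} = -c\,\frac{a_{21i}(X_j - X_{oi}) + a_{22i}(Y_j - Y_{oi}) + a_{23i}(Z_j - Z_{oi})}{a_{31i}(X_j - X_{oi}) + a_{32i}(Y_j - Y_{oi}) + a_{33i}(Z_j - Z_{oi})}$$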
- the initial values of (Xj, Yj, Zj) are the three-dimensional coordinates of the feature points in the local coordinate system obtained by SfM.
- (Xoi, Yoi, Zoi) and (a11i to a33i) are unknowns and are obtained by the adjustment calculation using Equation 1 and Equation 2.
- since (Xoi, Yoi, Zoi) and (a11i to a33i) are unknowns, the results of the initial value adjustment described below are used as their initial values.
- the initial values of (Xj, Yj, Zj) also contain errors, but they are reasonably close to the true values in the global coordinate system. Owing to these circumstances, the convergence and accuracy of the adjustment calculation are improved.
- the position of the reflecting prism 102 measured by the total station 200 is used.
- the residuals of Equation 1 and Equation 2 are calculated using the feature points (Xj, Yj, Zj), the exterior orientation parameters (Xoi, Yoi, Zoi, a11i to a33i (a rotation matrix indicating the orientation)), and (LX, LY, LZ) as parameters.
- a combination of (Xj, Yj, Zj), (Xoi, Yoi, Zoi, a11i to a33i), and (LX, LY, LZ) that makes the residuals converge is searched for by the least squares method.
- (LX, LY, LZ) are the offsets in the X, Y, and Z directions between the position of the camera 101 and the position of the reflecting prism 102.
- specifically, a correction amount is added to each parameter (Xj, Yj, Zj), (Xoi, Yoi, Zoi, a11i to a33i), and (LX, LY, LZ), and the simultaneous calculation of Equation 1 and Equation 2 is repeated so as to reduce their residuals. A combination of the unknown parameters (Xj, Yj, Zj), (Xoi, Yoi, Zoi, a11i to a33i), and (LX, LY, LZ) that satisfies the convergence conditions of Equation 1 and Equation 2 is thereby obtained. As the convergence conditions, a sufficiently small residual and a sufficiently small change in the residual from the previous iteration (a state in which the change in the calculation result has converged) are used.
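The sketch below (Python with SciPy) illustrates how such a simultaneous adjustment can be assembled for a general-purpose least-squares solver. It is only an outline under assumed conventions (a pinhole model standing in for Equation 2, a simple position constraint standing in for Equation 1, and a hypothetical parameter layout); the exact equations of the publication are given only as images and are not reproduced here:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def unpack(x, n_cams, n_pts):
    """Split the parameter vector into camera positions, camera rotation vectors,
    feature-point coordinates, and the camera-prism offset (LX, LY, LZ)."""
    cam_pos = x[:3 * n_cams].reshape(n_cams, 3)
    cam_rot = x[3 * n_cams:6 * n_cams].reshape(n_cams, 3)
    pts = x[6 * n_cams:6 * n_cams + 3 * n_pts].reshape(n_pts, 3)
    offset = x[-3:]
    return cam_pos, cam_rot, pts, offset

def residuals(x, obs, prism_at_shot, f, n_cams, n_pts, w_prism=1.0):
    """obs: list of (camera index i, point index j, observed (xij, yij));
    prism_at_shot[i]: time-corrected prism position for image i;
    f: focal length (screen distance c)."""
    cam_pos, cam_rot, pts, offset = unpack(x, n_cams, n_pts)
    res = []
    # Reprojection residuals (the role of Equation 2: collinearity condition)
    for i, j, xy in obs:
        R = Rotation.from_rotvec(cam_rot[i]).as_matrix()
        p_cam = R.T @ (pts[j] - cam_pos[i])           # point in the camera frame
        proj = -f * p_cam[:2] / p_cam[2]              # pinhole projection
        res.extend(proj - np.asarray(xy))
    # Constraint residuals (the role of Equation 1: camera position vs.
    # time-corrected prism position minus the camera-prism offset)
    for i in range(n_cams):
        res.extend(w_prism * (cam_pos[i] - (prism_at_shot[i] - offset)))
    return np.asarray(res)

# Given an initial guess x0 from the initial value adjustment, the combined
# system is solved with, e.g.:
#   sol = least_squares(residuals, x0, args=(obs, prism_at_shot, f, n_cams, n_pts))
```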
- the calculation unit 300 is a computer and includes a CPU, a storage device, and various interfaces.
- the calculation unit 300 includes an image data acquisition unit 301, a reflecting prism position acquisition unit 302, a relative orientation unit 303, an initial value adjustment unit 304, an adjustment calculation unit 305 taking into account a time series, a three-dimensional model creation unit 306, and a camera control unit 307.
- These functional units are realized in software by the computer that constitutes the calculation unit 300. Some or all of the above functional units can also be configured with dedicated hardware. It is also possible for the calculation unit 300 to be provided outside the heavy equipment 100. For example, it is also possible for the calculation unit 300 to be provided on a processing server connected to the Internet, and for calculations to be performed there.
- the image data acquisition unit 301 accepts image data of an image captured by the camera 101.
- the image data is associated with the time of capture.
- the time of capture is the time of a clock provided in the camera control unit 307.
- the reflecting prism position acquisition unit 302 accepts data on the position of the reflecting prism 102 measured by the total station 200.
- the time of measurement of this data is the time of a clock provided in the total station 200.
- the position and orientation of the total station 200 in the global coordinate system are known, and the position data of the reflecting prism 102 is obtained on the global coordinate system.
- the relative orientation unit 303 calculates, according to the SfM principle, the relationship between the numerous feature points extracted from the images captured by the camera 101 and the position and orientation of the camera 101. This relationship is specified in a local coordinate system. The principle of this process is briefly explained below. Assume that there is a captured image 1 taken by the camera 101 at a first position and a captured image 2 taken from a second position slightly shifted from the first position, and that the captured ranges of image 1 and image 2 overlap. The numerous feature points extracted from this overlapping range, the positional relationship between the first and second positions, and the relationship between the camera orientations at the first and second positions are determined according to the principle of three-dimensional photogrammetry (stereo photogrammetry). This is the relative orientation process.
- This process determines the position and orientation of the camera 101 at each shooting viewpoint position in the local coordinate system. If no scale is given, the local coordinate system will be a coordinate system that describes the relative positional relationship with an unknown scale. It is not necessary to give a scale, but if the scale is reflected in the captured image, the scale will be given to the local coordinate system.
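A minimal relative-orientation sketch for one such overlapping image pair is shown below (Python with OpenCV; the use of ORB features, the function name, and the fixed baseline scale are illustrative assumptions, not details taken from the publication):

```python
import cv2
import numpy as np

def relative_orientation(img1, img2, K):
    """Estimate the relative pose of two overlapping frames and triangulate the
    common feature points in a local coordinate system of unknown scale.
    img1/img2 are grayscale images, K is the 3x3 camera matrix."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Essential matrix and relative pose (rotation R, unit-length translation t)
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    # Triangulate the inlier feature points; |t| = 1 fixes the arbitrary scale
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inlier = mask.ravel().astype(bool)
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inlier].T, pts2[inlier].T)
    pts3d = (pts4d[:3] / pts4d[3]).T
    return R, t, pts3d
```

Chaining such pairwise orientations over the whole image sequence yields the camera positions, orientations, and feature points in one local coordinate system.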
- the initial value adjustment unit 304 determines the initial values of the correspondence between the position and time of the camera 101 described in the local coordinate system and the position and time of the reflecting prism 102 described in the global coordinate system. The adjustment calculation described above is performed using these initial values. This initial value adjustment can be said to be a process of determining the approximate relationship between the local coordinate system and the global coordinate system used in the relative orientation described above.
- the initial value adjustment is performed by the following three-stage processing.
- in the first-stage processing, attention is paid to the start of movement of the position of the camera 101 and the start of movement of the position of the reflecting prism 102, and the correspondence between the two is identified.
- that is, the correspondence between the timings at which movement starts and/or ends is found between the position of the reflecting prism 102 measured by the total station 200 (a position in the global coordinate system) and the position of the camera 101 calculated from the images captured by the camera 101 (a position in the local coordinate system).
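As one way to find this correspondence, the start of movement in each track can be detected with a simple displacement threshold, as in the sketch below (Python; the function name and the per-track threshold are illustrative assumptions, since the local-coordinate camera track and the global-coordinate prism track have different scales):

```python
import numpy as np

def movement_start_index(track, threshold):
    """Return the index of the first sample whose displacement from the initial
    position exceeds `threshold`, used as a simple movement-start detector."""
    p = np.asarray(track, dtype=float)
    displacement = np.linalg.norm(p - p[0], axis=1)
    return int(np.argmax(displacement > threshold))

# The camera track (local coordinates, arbitrary scale) and the prism track
# (global coordinates) are then roughly aligned in time by pairing the samples
# at which each track starts to move (and, analogously, stops moving).
```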
- the second stage of processing is carried out after the first stage of processing.
- processing is carried out focusing on changes in position.
- the principle is shown in Figure 4 (A).
- the circles indicate the positions of the camera 101 or the reflecting prism 102.
- the angle formed by each set of three consecutive points is calculated for all points (all camera positions and all reflecting prism positions).
- the angles on the camera side and the angles on the reflecting prism side are compared, and the combination that minimizes the sum of the absolute values of the differences is searched for.
- let the angles along the positions of the camera 101 be θc1, θc2, θc3, θc4, ..., θcn,
- and the angles along the positions of the reflecting prism 102 be θp1, θp2, θp3, θp4, ..., θpn.
- the differences between the angles θc1, θc2, θc3, θc4, ..., θcn and θp1, θp2, θp3, θp4, ..., θpn are calculated, and the sum of their absolute values is obtained.
- the combination (offset) at which this sum is smallest is adopted as the corresponding combination.
- the change in the position of the reflecting prism 102 in the global coordinate system is compared with the change in the position of the camera 101 in the local coordinate system, and the correspondence between the two is identified.
- initial value adjustment can be performed with even higher accuracy than in the first stage.
- Figure 4 (C) shows a case where the calculation interval for the position of the camera 101 is 1 second (1 Hz), and the measurement interval for the position of the reflecting prism 102 is 0.2 seconds (5 Hz).
- in this case, the measurement points of the position of the reflecting prism 102 are thinned out to one point in every five (i.e., downsampled by a factor of 5) so as to match the calculation interval of the positions of the camera 101. Aligning the intervals in this way makes the comparison easier. This method is also effective in the first-stage processing.
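The sketch below (Python; function names are illustrative) computes these turning angles and searches for the offset that minimizes the sum of absolute angle differences, after thinning the 5 Hz prism track to match the 1 Hz camera track as described above:

```python
import numpy as np

def turning_angles(track):
    """Angle at each interior point of a track, formed by the segment from the
    previous point and the segment to the next point (three consecutive points)."""
    p = np.asarray(track, dtype=float)
    v1 = p[1:-1] - p[:-2]
    v2 = p[2:] - p[1:-1]
    cos = np.sum(v1 * v2, axis=1) / (np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def best_offset(camera_track, prism_track, prism_step=5):
    """Search for the offset (in thinned prism samples) that minimizes the sum of
    absolute differences between camera-side and prism-side turning angles.
    prism_step=5 matches 5 Hz prism data to 1 Hz camera positions."""
    prism = np.asarray(prism_track, dtype=float)[::prism_step]
    cam_angles = turning_angles(camera_track)
    best_off, best_cost = None, np.inf
    for off in range(len(prism) - len(camera_track) + 1):
        prism_angles = turning_angles(prism[off:off + len(camera_track)])
        cost = np.sum(np.abs(cam_angles - prism_angles))
        if cost < best_cost:
            best_off, best_cost = off, cost
    return best_off, best_cost
```

Because the turning angles depend only on the shape of the trajectory, they can be compared between the local-coordinate camera track and the global-coordinate prism track without knowing the scale or rotation between the two systems.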
- the third stage of processing is carried out after the second stage of processing.
- the correspondence between the camera position and the reflecting prism position is found using the least squares method.
- a transformation matrix from the local coordinate system to the global coordinate system is found.
- the feature points and the camera positions and orientations in the local coordinate system are transformed into the global coordinate system using this transformation matrix.
- the transformation matrix found here and the results of the transformation using this transformation matrix are used as the initial values for the next adjustment calculation.
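For this third stage, the local-to-global transformation can be estimated from the matched camera and prism positions with a least-squares similarity transform; the publication does not name a specific algorithm, so the sketch below (Python) uses Umeyama's method as one common choice:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping local-coordinate points `src` onto matched global-coordinate points
    `dst`, i.e. dst ~ s * R @ src + t (Umeyama's method)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)                       # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                    # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    var_src = np.mean(np.sum(xs ** 2, axis=1))
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Applying (s, R, t) to the feature points and camera positions in the local coordinate system gives the initial values used in the subsequent adjustment calculation.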
- the above three-stage alignment process obtains the initial values for the relationship between the local coordinate system, which describes the relationship between the camera position and feature points of each captured image obtained by relative orientation, and the global coordinate system.
- the time series-considered adjustment calculation unit 305 performs the adjustment calculation using Equation 1 and Equation 2 described in relation to FIG. 2. This adjustment calculation is performed using the result of the initial value adjustment described above.
- the three-dimensional model creation unit 306 creates a three-dimensional model of the subject to be photographed based on point cloud data, which is data on the positions of each feature point in the global coordinate system obtained as a result of the adjustment calculation.
- point cloud data is data on the positions of each feature point in the global coordinate system obtained as a result of the adjustment calculation.
- a three-dimensional model of the terrain around the heavy equipment 100 photographed while the heavy equipment 100 is working is created. By tracking changes in this three-dimensional model over time, the content of the work performed by the heavy equipment 100 (changes in the terrain) can be digitized.
- the camera control unit 307 controls the timing of image capture by the camera 101. If the camera 101 itself has a function for performing this control, the camera control unit 307 outputs a signal that instructs the camera to start and stop image capture.
- FIG. 5 is a flowchart showing an example of the procedure of the processing performed in the calculation unit 300 of FIG. 3.
- a program for executing the processing of FIG. 5 is stored in a storage device of the computer constituting the calculation unit 300 or in an appropriate storage medium, and is executed by that computer.
- the camera 101 and reflecting prism 102 are fixed to the heavy equipment 100, and the distance L between the camera 101 and the reflecting prism 102 is known. L does not have to be accurate. Also, L may be unknown. In this case, L can be found by adjustment calculations.
- the positions of the camera 101 and the reflecting prism 102 on the heavy equipment 100 are unknown. Furthermore, while the heavy equipment 100 is in operation, the camera 101 repeatedly takes images. The total station 200 continuously and repeatedly measures the position of the reflecting prism 102. The position and orientation of the total station 200 in the global coordinate system are known. Furthermore, the time when the camera takes an image and the time when the total station measures the position of the reflecting prism 102 are not synchronized.
- the processing in FIG. 5 is performed after the operation of the heavy equipment 100 has ended.
- the processing in FIG. 5 may also be performed while the heavy equipment 100 is in operation.
- batch processing in which the processing in FIG. 5 is performed every 5 or 10 minutes is also possible. It is also possible to perform the processing in FIG. 5 in real time.
- step S101 image data captured by the camera 101 is acquired (step S101). This process is performed in the image data acquisition unit 301 in FIG. 3.
- step S102 data on the position of the reflecting prism 102 measured by the total station 200 is acquired (step S102). This process is performed in the reflecting prism position acquisition unit 302 in FIG. 3.
- step S103 the positions of the feature points in the local coordinate system based on the image captured by the camera and the position and orientation of the camera 101 are calculated according to the sfm principle. This process is performed by the relative orientation unit 303 in FIG. 3.
- the local coordinate system may be assigned a scale, but this is not required. If the captured images contain multiple reference points whose positions are known in the global coordinate system, the local coordinate system can be treated as the global coordinate system, but this is not required either.
- step S103 the positions in the local coordinate system of the numerous feature points extracted from the captured image are obtained. If the scale and reference point are not given, step S103 obtains the camera position and orientation (which are equal to the number of captured images obtained while the camera was moving) and the relative three-dimensional relationship between the numerous feature points.
- step S104 initial value adjustment is performed (step S104). This process is performed in the initial value adjustment unit 304 in FIG. 3. By performing this initial value adjustment, the approximate relationship between the position and time of the camera 101 on the local coordinate system and the position and time of the camera 101 on the global coordinate system is given.
- an adjustment calculation (adjustment calculation taking into account the time series) is performed using Equations 1 and 2 (step S105).
- This adjustment calculation takes into account the difference between the time on the camera 101 side and the time on the total station 200 side.
- This adjustment calculation identifies the position and orientation of the camera 101 in the global coordinate system, and the positions in the global coordinate system of many feature points obtained from the captured image obtained in step S103.
- This process is performed by the adjustment calculation unit 305 taking into account the time series in Figure 3.
- a three-dimensional model is created based on the feature points obtained from the captured image (step S106). This process is performed by the three-dimensional model creation unit 306 in FIG. 3. In this way, a three-dimensional model of the object captured by the camera 101 mounted on the heavy equipment 100 is obtained on the global coordinate system.
- a GNSS position measuring device may be mounted on the heavy equipment 100, and the position of the camera 101 on the heavy equipment may be identified using the GNSS position measuring device.
- the reflecting prism and the total station may be unnecessary (of course, they may be used together).
- Measurement of the position using the GNSS includes errors, but the GNSS errors can be suppressed by adjustment calculation.
- the adjustment calculation the calculation is performed assuming that an antenna of the GNSS position measuring device is installed instead of the reflecting prism 102 in FIG. 1.
- measurement errors can be suppressed by using relative positioning such as the RTK method.
- the handling of feature points, various orientations, and other data processing are the same as those in the first embodiment.
- 100 heavy machinery
- 101 camera
- 102 reflecting prism
- 110 rotating part
- 120 base part
- 151 arm part
- 152 bucket
- 200 total station
- 300 computing device.
Abstract
Description
(Overview)
FIG. 1 shows a heavy machine 100 that performs civil engineering work and a total station 200, a surveying device that measures the position of the heavy machine 100. While the heavy machine 100 is in operation, images are captured by the on-board camera 101, and 3D data of the surroundings of the heavy machine 100 is acquired using the principle of SfM (Structure from Motion). During this, the total station 200 tracks the reflecting prism 102 mounted on the heavy machine 100 and repeatedly and continuously measures its position. Based on the measured values of the position of the reflecting prism 102, the above 3D data is obtained as data in the global coordinate system. The principle of SfM is described, for example, in Japanese Patent Application Laid-Open No. 2013-186816. The global coordinate system is the coordinate system used in maps and GNSS.
The heavy machine 100 is a power shovel. The power shovel is one example; the type of heavy machine is not particularly limited as long as it performs civil engineering work. The heavy machine 100 includes a base unit 120 that travels on the ground on caterpillar tracks and a rotating unit 110 that rotates horizontally on the base unit 120. The rotating unit 110 includes a driver's seat and an arm 151, and a bucket 152 is disposed at the tip of the arm 151. These structures are the same as those of an ordinary power shovel.
The total station 200 is an example of a surveying device capable of measuring a position. The total station 200 includes a positioning function using laser light, a camera, a clock, a storage device for survey data, a communication interface, a user interface, a function for searching for the survey target (the reflecting prism 102), and a function for tracking the survey target even when it moves. A commonly available model can be used as the total station 200.
3D data of the photographed subject is created from the images captured by the camera 101 mounted on the heavy machine 100 according to the following principle. The camera 101 repeatedly and continuously captures still images. The position and orientation of the camera 101 change while the heavy machine 100 is in operation, and the capture interval of the camera 101 is determined so that captured images adjacent on the time axis overlap. The specific capture rate is set to roughly several Hz to 30 Hz. Basically, a three-dimensional model of the subject is created using SfM, based on the images captured by the camera while the heavy machine 100 is in operation, by the following processing.
A relative three-dimensional model of arbitrary scale is created based on the images captured by the camera 101, and the relative relationship between the feature points identified across multiple images and the position and orientation of the camera 101 at the time each image was captured is determined. Specifically, feature points common to stereo images are extracted, and the positions and orientations of the camera at multiple viewpoints (capture positions) and the relative relationships of the feature points are identified.
A bundle adjustment calculation and an adjustment calculation that takes into account the positional relationship between the reflecting prism 102 and the camera 101 and the deviation in measurement times are performed simultaneously, and the position and orientation of the camera 101 and the positions of the feature points in the global coordinate system are optimized. This yields point cloud data in which the positions of the feature points are described in the global coordinate system.
A 3D model of the target is created based on the point cloud data optimized by the above adjustment calculation. Examples of the 3D model include digitized contour lines of the target, a DEM (Digital Elevation Model), and a TIN (Triangulated Irregular Network). Creation of a 3D model based on point cloud data is described in, for example, WO2011/070927, JP2012-230594A, and JP2014-35702A.
In this example, the clock of the control system that controls image capture by the camera 101 (the function that specifies the time used for processing) and the clock of the control system that controls the operations related to position measurement by the total station 200 are not synchronized. This situation can occur, for example, when costs are kept low or a simple configuration is adopted.
(Xj, Yj, Zj): three-dimensional coordinates of the feature point of interest
(xij, yij): coordinates on image i (on the screen) of point j
(Xoi, Yoi, Zoi): position of the camera 101 when photograph i was taken
(a11i to a33i): rotation matrix indicating the orientation of the camera 101 when photograph i was taken
FIG. 3 is a block diagram of the calculation unit 300. The calculation unit 300 is a computer and includes a CPU, a storage device, and various interfaces. The calculation unit 300 includes an image data acquisition unit 301, a reflecting prism position acquisition unit 302, a relative orientation unit 303, an initial value adjustment unit 304, an adjustment calculation unit 305 that takes the time series into account, a three-dimensional model creation unit 306, and a camera control unit 307.
FIG. 5 is a flowchart showing an example of the procedure of the processing performed in the calculation unit 300 of FIG. 3. A program for executing the processing of FIG. 5 is stored in a storage device of the computer constituting the calculation unit 300 or in an appropriate storage medium, and is executed by that computer.
Image data of images captured by the camera 101, which moves while mounted on the heavy machine 100, is obtained; the position of the reflecting prism 102, which is a specific part fixed relative to the camera 101, is measured by the total station 200; and the position of the camera 101 is calculated based on the measured position of the reflecting prism 102. There is a deviation between the measurement time of the position of the reflecting prism 102 and the shooting time of the camera 101. In the calculation of the position of the camera 101, an adjustment calculation is performed in which the position of the camera 101 is treated as an unknown, and this adjustment calculation is performed based on the measured position of the reflecting prism 102 to which a correction amount corresponding to the time deviation has been added.
According to this embodiment, when a three-dimensional model of the subject is created based on images captured by the camera 101 mounted on the heavy machine 100, errors caused by the lack of synchronization between the shooting time of the camera 101 and the positioning time of the reflecting prism 102 can be suppressed. Therefore, the accuracy of three-dimensional data based on images captured by a camera mounted on a moving object can be improved.
A GNSS position measuring device may be mounted on the heavy machine 100, and the position of the camera 101 on the heavy machine may be identified using this GNSS position measuring device. In this case, the reflecting prism and the total station can be made unnecessary (of course, they may also be used together). Position measurement using GNSS includes errors, but the GNSS errors can be suppressed by the adjustment calculation. In the adjustment calculation, the computation is performed assuming that the antenna of the GNSS position measuring device is installed in place of the reflecting prism 102 in FIG. 1. Measurement errors can also be suppressed by using relative positioning such as the RTK method. The handling of feature points, the various orientations, and other data processing are the same as in the first embodiment.
Claims (6)
- 1. A surveying method wherein: image data of images captured by a moving camera is obtained; the position of a specific part fixed relative to the camera is measured; and the position of the camera is calculated based on the measured position of the specific part; there is a deviation between the measurement time of the position of the specific part and the shooting time of the captured image; in the calculation of the position of the camera, an adjustment calculation is performed in which the position of the camera is treated as an unknown; and the adjustment calculation is performed based on the position of the specific part to which a correction amount corresponding to the time deviation has been added.
- 2. The surveying method according to claim 1, wherein a process for obtaining initial values for the adjustment calculation is performed, and the process for obtaining the initial values includes determining a correspondence between the start and/or end of movement of the position of the camera in a local coordinate system, calculated based on the images captured by the camera, and of the position of the specific part in a global coordinate system.
- 3. The surveying method according to claim 1, wherein a process for obtaining initial values for the adjustment calculation is performed, and the process for obtaining the initial values determines a correspondence between changes in the position of the camera in a local coordinate system, calculated based on the images captured by the camera, and changes in the position of the specific part in a global coordinate system.
- 4. The surveying method according to claim 3, wherein the change in position is evaluated by the angle formed by a first straight line connecting a first position and a second position and a second straight line connecting the second position and a third position.
- 5. A surveying system comprising: means for obtaining image data of images captured by a moving camera; means for measuring the position of a specific part fixed relative to the camera; and means for calculating the position of the camera based on the measured position of the specific part, wherein there is a deviation between the measurement time of the position of the specific part and the shooting time of the captured image; in the calculation of the position of the camera, an adjustment calculation is performed in which the position of the camera is treated as an unknown; and the adjustment calculation is performed based on the position of the specific part to which a correction amount corresponding to the time deviation has been added.
- 6. A program to be read and executed by a computer, the program causing the computer to execute: acquisition of image data of images captured by a moving camera; acquisition of measurement data of the position of a specific part fixed relative to the camera; and calculation of the position of the camera based on the measured position of the specific part, wherein there is a deviation between the measurement time of the position of the specific part and the shooting time of the captured image; in the calculation of the position of the camera, an adjustment calculation is performed in which the position of the camera is treated as an unknown; and the adjustment calculation is performed based on the position of the specific part to which a correction amount corresponding to the time deviation has been added.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2023058216A JP2024145734A (ja) | 2023-03-31 | 2023-03-31 | Surveying method, surveying system, and program |
| JP2023-058216 | 2023-03-31 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024204029A1 (ja) | 2024-10-03 |
Family
ID=92905370
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2024/011633 WO2024204029A1 (ja), Ceased | Surveying method, surveying system, and program | 2023-03-31 | 2024-03-25 |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JP2024145734A (ja) |
| WO (1) | WO2024204029A1 (ja) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2009042175A (ja) * | 2007-08-10 | 2009-02-26 | Koishi:Kk | Construction position measuring system and batter-board-less system |
| WO2010001940A1 (ja) * | 2008-07-01 | 2010-01-07 | 株式会社トプコン | Position measurement method, position measurement device, and program |
| JP2019045425A (ja) * | 2017-09-06 | 2019-03-22 | 株式会社トプコン | Survey data processing device, survey data processing method, and survey data processing program |
| JP2023119313A (ja) * | 2022-02-16 | 2023-08-28 | マック株式会社 | Measurement system |
| WO2024057757A1 (ja) * | 2022-09-15 | 2024-03-21 | 株式会社トプコン | Computing device, computing method, and program |
| WO2024062781A1 (ja) * | 2022-09-22 | 2024-03-28 | 株式会社トプコン | Computing device, computing method, and program |
- 2023-03-31 JP JP2023058216A patent/JP2024145734A/ja active Pending
- 2024-03-25 WO PCT/JP2024/011633 patent/WO2024204029A1/ja not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| JP2024145734A (ja) | 2024-10-15 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 24780147 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2024780147 Country of ref document: EP |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| ENP | Entry into the national phase |
Ref document number: 2024780147 Country of ref document: EP Effective date: 20251031 |
|