WO2024062602A1 - Three-dimensionalization system, three-dimensionalization method, and recording medium storing a program - Google Patents
- Publication number
- WO2024062602A1 (PCT/JP2022/035391)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- images
- road
- image
- dimensional
- imaging device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G06T7/0002 — Inspection of images, e.g. flaw detection (G06T7/00 Image analysis)
- G06T7/55 — Depth or shape recovery from multiple images
- G01B11/24 — Measuring arrangements using optical techniques for measuring contours or curvatures
- G01B11/30 — Measuring arrangements using optical techniques for measuring roughness or irregularity of surfaces
- G01C3/06 — Optical rangefinders using electric means to obtain the final indication
- G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
- G06T2207/30184 — Infrastructure (Earth observation)
- G06T2207/30256 — Lane; Road marking (vehicle exterior; vicinity of vehicle)
Definitions
- the present disclosure relates to a three-dimensional system and the like.
- Roads and structures located around roads are subject to damage due to deterioration over time or accidents, so repair work is required.
- Three-dimensional data of roads and structures is sometimes used to support planning of repair work.
- Patent Document 1 discloses a three-dimensional model construction system that collects photographic data photographed by photographing devices provided in each of a plurality of moving objects and generates a three-dimensional model.
- in Patent Document 1, an imaging area of an existing 3D model is specified using supplementary information about imaging included in the imaging data, and the existing 3D model is updated using new imaging data for that area.
- with such a system, a three-dimensional model can be generated for large fixed objects such as roads, buildings, and bridges.
- An object of the present disclosure is to provide a three-dimensional system and the like that can generate a highly accurate three-dimensional model.
- a three-dimensional system according to one aspect of the present disclosure includes: acquisition means for acquiring a plurality of images photographed by an imaging device installed in each of a plurality of moving objects; selection means for selecting, based on predetermined conditions, at least two images of a road surface or a structure on a road from among the plurality of acquired images; and generation means for generating a three-dimensional model of the photographed road surface or structure on the road using the selected images.
- a three-dimensionalization method according to one aspect of the present disclosure acquires a plurality of images photographed by an imaging device installed in each of a plurality of moving objects, selects, based on predetermined conditions, at least two images of a road surface or a structure on a road from among the plurality of acquired images, and generates a three-dimensional model of the photographed road surface or structure on the road using the selected images.
- a program according to one aspect of the present disclosure causes a computer to execute processing of: acquiring a plurality of images photographed by an imaging device installed in each of a plurality of moving objects; selecting, based on predetermined conditions, at least two images of a road surface or a structure on a road from among the plurality of acquired images; and generating a three-dimensional model of the photographed road surface or structure on the road using the selected images.
- the program may be stored in a computer-readable non-transitory recording medium.
- FIG. 1 is a diagram showing an outline of a device connected to a three-dimensional system.
- FIG. 2 is a block diagram showing a configuration example of the three-dimensional system according to the first embodiment.
- FIG. 3 is a flowchart showing an example of the operation of the three-dimensional system according to the first embodiment.
- FIG. 4 is a table showing an example of information included in photographic data.
- FIG. 5 is a block diagram showing an example of the hardware configuration of a computer.
- the surface of paved roads suffers from deterioration such as cracks, potholes, and ruts due to factors such as vehicle driving and rainfall.
- Road structures such as signs, lights, guardrails, and curbs also deteriorate or become damaged. For this reason, road conditions are analyzed in order to understand the state of deterioration of roads and structures and plan repairs for the roads and structures.
- a three-dimensional system according to the present disclosure generates a three-dimensional model of a road surface or a structure on a road using images selected, based on predetermined conditions, from among images taken by imaging devices installed on a plurality of moving objects.
- the roads targeted by the three-dimensional system disclosed herein are not limited to roads on which vehicles pass, and also include roads on which people pass.
- the area targeted for three-dimensional modeling is not limited to the road itself, and includes road slopes and land on which structures necessary for road management exist.
- FIG. 1 is a diagram showing an overview of the three-dimensional system 100 and the devices connected to it by wired or wireless communication via a communication network 30.
- the three-dimensional system 100 is connected to, for example, an imaging device 10, a display 20, an input device 21, and a database 40.
- the imaging device 10 is installed on a moving body 11 and captures an image including a road or a structure on the road.
- the imaging device 10 is realized by, for example, a drive recorder installed in a car.
- the type of imaging device 10 is not limited to this, and cameras provided on various types of moving bodies 11 may be used.
- the image may be taken with a camera mounted on another moving object such as a bicycle or a drone.
- the image captured by the imaging device 10 may be a still image or a moving image captured while the moving body 11 is moving. Images may be taken at a location designated by a person, or may be taken automatically at arbitrary intervals.
- FIG. 1 one imaging device 10 and one moving object 11 are shown.
- the three-dimensional system 100 may be connected to a plurality of imaging devices 10-1, ..., 10-n installed in a plurality of moving objects 11-1, ..., 11-n.
- n is a natural number of 2 or more.
- Each of the plurality of moving bodies 11 may be of the same type or may be of different types.
- Each of the plurality of imaging devices 10 may be the same model or may be a different model.
- Photographic data including images photographed by the imaging device 10 is stored in the database 40. Further, the imaging device 10 may transmit photographic data including images to the three-dimensional system 100.
- the photographing data may further include the following image photographing conditions.
- the photographic data may include an identifier that identifies the imaging device 10 that photographed the image.
- the photographic data may include positional information of the point where the image was photographed.
- the location information includes, for example, latitude and longitude, position information based on a GNSS (Global Navigation Satellite System) such as GPS (Global Positioning System), or a location on a map.
- the photographic data may include time information regarding the date and time when the image was photographed.
- the photographing data may include the photographing direction in which the image was photographed.
- the photographing direction includes, for example, the azimuth or elevation/depression angle in which the imaging device 10 is facing.
- the photographing direction can be acquired by a sensor included in the imaging device 10. Further, the photographing direction can be obtained based on the traveling direction of the moving body 11 when the installation direction of the imaging device 10 with respect to the moving body 11 is defined.
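As an illustrative sketch (not part of the disclosure), the photographing azimuth derived from the travel direction of the moving body and a defined installation direction could be computed as follows; the function name and the degree conventions are assumptions:

```python
def shooting_azimuth(vehicle_heading_deg, mount_angle_deg):
    """Azimuth the imaging device faces, given the travel direction of the
    moving body (degrees clockwise from north) and the fixed installation
    angle of the device relative to the front of the moving body."""
    return (vehicle_heading_deg + mount_angle_deg) % 360.0

# A rear-facing camera (mounted at 180 degrees) on a vehicle heading
# east (90 degrees) faces west (270 degrees).
print(shooting_azimuth(90.0, 180.0))
```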
- the display 20 displays information to the user.
- the display 20 is realized by, for example, a monitor, a tablet, or the like. The information to be displayed will be described later.
- the input device 21 accepts operations from the user.
- the input device 21 includes, for example, a mouse and a keyboard.
- when the display 20 is a touch panel display, the display 20 may also serve as the input device 21.
- the database 40 stores photographic data including images photographed by the imaging device 10.
- FIG. 2 is a block diagram showing a configuration example of the three-dimensional system 100 according to the first embodiment.
- the three-dimensional system 100 includes an acquisition section 110, a selection section 120, and a generation section 130.
- the three-dimensional system 100 further includes an output unit 140 as required.
- the acquisition unit 110 acquires a plurality of images including an image taken at a predetermined point, which is taken by the imaging device 10 installed in each of the plurality of moving objects 11.
- the predetermined point is a point for which a three-dimensional model is to be generated.
- the range of points from which the acquisition unit 110 acquires images can be set as appropriate.
- the acquisition unit 110 may acquire a moving image taken at a predetermined point. Further, the acquisition unit 110 may extract and acquire a still image taken at a predetermined point from each of the plurality of moving images.
- the acquisition unit 110 may acquire shooting data including an image. That is, the acquisition unit 110 may acquire, together with the image, location information of the location where the image was captured, the shooting date and time, model information of the imaging device 10, and the like.
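One way the acquisition unit might gather images taken at a predetermined point is to compare the position information of each record against the point. The sketch below is an assumption about such a filter, using the standard haversine distance; the record layout (`lat`/`lon` keys) and radius are hypothetical:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def images_near_point(photo_data, point, radius_m=20.0):
    """Keep records whose shooting position lies within radius_m of the
    predetermined point. Each record is a dict with 'lat' and 'lon'."""
    lat, lon = point
    return [d for d in photo_data
            if haversine_m(d["lat"], d["lon"], lat, lon) <= radius_m]
```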
- the selection unit 120 selects at least two images of a road surface or a structure on a road from the plurality of acquired images based on predetermined conditions.
- the predetermined conditions are conditions for selecting an image suitable for generating a three-dimensional model of a road surface or a structure on a road.
- a three-dimensional model is data representing the three-dimensional shape and size of an object.
- the three-dimensional model is, for example, three-dimensional point cloud information.
- Images suitable for generating a three-dimensional model are at least two images from which sufficient information required for calculating the three-dimensional shape and size of an object can be obtained.
- Images suitable for generating a three-dimensional model include images that are expected to have parallax with respect to the object.
- the selection unit 120 may select an image suitable for estimating the depth of road deterioration as an image suitable for generating a three-dimensional model of the road surface.
- Road deterioration includes, for example, cracks, potholes and ruts.
- a road structure is an object installed near a road where vehicles and people pass. Structures on the road include, for example, signs, lights, guardrails, curbs, and the like.
- the selection unit 120 selects at least two images based on the acquired photographic data and predetermined conditions. Details of the predetermined conditions will be explained in the second embodiment.
- the selection unit 120 may select, from the images acquired by the acquisition unit 110, images that are taken at the same point and that satisfy the predetermined conditions. For example, the selection unit 120 refers to the position information of the photographed data and extracts images taken at the same point. Furthermore, the selection unit 120 may extract images taken at the same point by estimating the point where the image is taken based on the feature amount of the image. Then, the selection unit 120 may select at least two images that satisfy a predetermined condition from among the extracted images. Alternatively, the selection unit 120 may extract images that satisfy a predetermined condition and then select images taken at the same point.
- the generation unit 130 generates a three-dimensional model of the photographed road surface or structure on the road using at least two images selected by the selection unit 120. For example, the generation unit 130 obtains parameters necessary to process the parallax of the selected image. Necessary parameters are, for example, the distance between the imaging devices 10 and the focal length of the imaging devices 10. The generation unit 130 then generates a three-dimensional model by calculating the distance from the imaging device 10 to the photographed object based on the acquired parameters and the parallax of the selected image.
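The distance computation described above follows the standard stereo relation Z = f·B/d (focal length times baseline over disparity). A minimal sketch, with the function name and units assumed for illustration:

```python
def depth_from_parallax(focal_px, baseline_m, disparity_px):
    """Distance from the imaging devices to the photographed object,
    by the stereo relation Z = f * B / d:
    focal length (pixels) times baseline (metres) over disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With a 1000 px focal length, 1.2 m between devices, and 40 px parallax,
# the object is 30 m away.
print(depth_from_parallax(1000.0, 1.2, 40.0))
```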
- the generating unit 130 may generate a three-dimensional model showing the depth of road deterioration as a three-dimensional model of the road surface.
- the depth of road deterioration includes the depth of cracks, the depth of potholes, and the amount of rutting.
- the output unit 140 outputs information to the display 20 based on the generated three-dimensional model.
- the output unit 140 may display the three-dimensional model on the display 20.
- the output unit 140 may output a value of the depth of road deterioration.
- the depth of road deterioration is calculated, for example, from the change in height along a cross-section of the three-dimensional model cut out in the road width direction.
- FIG. 3 is a flowchart showing an example of the operation of the three-dimensional system 100 according to the first embodiment.
- the three-dimensional system 100 may start the operation shown in FIG. 3 in response to a user's operation using the input device 21.
- the acquisition unit 110 acquires a plurality of images taken by the imaging devices 10 installed in each of the plurality of moving objects 11 (step S11).
- the selection unit 120 selects at least two images of the road surface or structures on the road from the multiple images acquired by the acquisition unit 110 based on predetermined conditions (step S12).
- the generation unit 130 generates a three-dimensional model of the photographed road surface or structure on the road using the image selected by the selection unit 120 (step S13).
- the output unit 140 outputs information to the display 20 based on the generated three-dimensional model (step S14).
- the acquisition unit 110 acquires a plurality of images taken by the imaging devices 10 installed in each of the plurality of moving objects 11. Then, the selection unit 120 selects at least two images of the road surface or structures on the road from the plurality of images acquired by the acquisition unit 110 based on predetermined conditions.
- the generation unit 130 generates a three-dimensional model of the photographed road surface or structure on the road using the image selected by the selection unit 120. Since images suitable for generation of a three-dimensional model are selected based on predetermined conditions, the first embodiment makes it possible to generate a highly accurate three-dimensional model.
- Patent Document 2 discloses an imaging system that creates three-dimensional road surface data based on images captured by a stereo camera.
- however, stereo cameras and light-section measurement devices are expensive.
- road conditions can be analyzed using images taken with a monocular camera such as a drive recorder. Therefore, according to the first embodiment, images required for a three-dimensional model can be collected at low cost.
- Patent Document 3 discloses an image processing device that uses a monocular camera to improve the convenience of distance measurement technology using the motion stereo method. It has been difficult to estimate the depth of road deterioration from images taken with a monocular camera such as a drive recorder. In particular, since cracks in the road surface are small, it has been difficult to estimate the depth of the cracks. In order to analyze the depth of road deterioration, it is necessary to accurately convert images into three-dimensional images. According to the first embodiment, an image suitable for estimating the depth of road deterioration is selected from images taken by the imaging device 10 installed in each of the plurality of moving objects 11. Then, using the selected image, the generation unit 130 generates a three-dimensional model representing the depth of road deterioration. Therefore, according to the first embodiment, it is possible to support the creation of a repair plan based on the depth of road deterioration such as cracks, potholes, or ruts, while reducing costs.
- the acquisition unit 110 acquires photographic data including images taken at a predetermined point.
- FIG. 4 is a table showing an example of information included in the photographic data.
- the photographing data in FIG. 4 includes an image ID (identifier), an imaging device ID, time information, position information, and a photographing direction.
- the image ID is an identifier that identifies an image.
- the image ID may be an identifier that identifies one frame of a moving image.
- the imaging device ID is an identifier that identifies the imaging device 10 that captured the image.
- the photographic data need not include all of the information shown in FIG. 4, and may include information other than that shown.
- the selection unit 120 may select images based on any combination of conditions described below.
- the predetermined condition is a condition related to the similarity of the images.
- the selection unit 120 may select an image from the multiple images based on the similarity of the images.
- the similarity of the images is calculated by any method. The similarity between images of the same object taken from similar viewpoints at the same location is higher than the similarity between images of different objects taken from different viewpoints at the same location.
- the selection unit 120 selects at least two images whose similarity is higher than a threshold (lower limit). Two images with a degree of similarity higher than the threshold have the same imaging range, and are likely to have captured the same object. Therefore, the distance can be calculated based on the parallax of the object. When the degree of similarity between images is low, the difference in imaging range is too large, and parallax information may be insufficient, making it difficult to calculate distance.
- the brightness and hue of the image may vary depending on sunlight conditions. Therefore, in order to estimate the difference between the imaging ranges based on the similarity, the similarity between the images may be calculated after converting the brightness and hue of the images. Note that if it is preferable to select images under the same sunlight conditions, the similarity is calculated as is without converting the images.
- the selection unit 120 may also select images such that the degree of similarity between the selected images is lower than an upper-limit threshold. This is because distance calculation also becomes difficult when the degree of similarity between images is too high and the overlap of the imaging ranges is too large.
- the selection unit 120 selects images whose degree of similarity is lower than the upper limit threshold, so that images captured by the respective imaging devices 10 of the plurality of moving objects 11 from different viewpoints can be selected.
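The two similarity thresholds described above can be combined into a band filter: keep only pairs similar enough to share an imaging range, but different enough to carry parallax. A minimal sketch; the similarity function, its range, and the threshold values are assumptions:

```python
from itertools import combinations

def select_pairs_by_similarity(images, sim, lower=0.5, upper=0.95):
    """Return image pairs whose similarity lies strictly between the
    lower-limit and upper-limit thresholds. `sim(a, b)` is any image
    similarity function returning a value in [0, 1]."""
    return [(a, b) for a, b in combinations(images, 2)
            if lower < sim(a, b) < upper]
```

For example, a pair with similarity 0.98 (almost no parallax) and a pair with similarity 0.3 (insufficient overlap) are both rejected, while a 0.7 pair is kept.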
- the predetermined condition is a condition regarding the imaging device 10 that captured the image.
- the selection unit 120 may select images so as to include images taken by different imaging devices 10 installed in different moving objects 11. At this time, the selection unit 120 refers to the imaging device ID of the imaging data, for example.
- the selection unit 120 may select at least two images captured by the imaging device 10 of one moving body 11 that has passed the specified point multiple times. This is because parallax information can be obtained between at least two images when the driving positions on the road at the specified point differ between the first and second passes. That is, the selection unit 120 may select at least two images captured by the imaging device 10 installed on the same moving body 11 at dates and times that differ by a specified time or more. The selection unit 120 may also exclude from the selection images captured continuously by the imaging device 10 during a single pass of one moving body 11. Two continuously captured images yield only the parallax corresponding to the movement of the moving body 11 between one frame and the next. Therefore, continuously captured images may not be suitable for generating a three-dimensional model.
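The device/time condition above can be sketched as a pair filter: accept pairs from different devices, or from the same device on passes separated by a minimum time gap, rejecting consecutive frames of a single pass. The record layout and the gap value are assumptions for illustration:

```python
def candidate_pairs(records, min_gap_s=60.0):
    """Index pairs usable for parallax: images from different imaging
    devices, or from the same device on passes separated by at least
    min_gap_s seconds. Consecutive frames of a single pass are excluded.
    Each record is a dict with 'device_id' and 't' (unix seconds)."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            a, b = records[i], records[j]
            if a["device_id"] != b["device_id"]:
                pairs.append((i, j))            # different moving bodies
            elif abs(a["t"] - b["t"]) >= min_gap_s:
                pairs.append((i, j))            # separate passes, same body
    return pairs
```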
- the predetermined condition is a condition regarding the installation state of the imaging device 10.
- the installation state of the imaging device 10 includes the installation height, the installation angle or lateral position with respect to the moving object 11, and the type of the installed moving object 11.
- the selection unit 120 may select an image based on the installation state of the imaging device 10. For example, the selection unit 120 selects images captured by imaging devices 10 in different installation states. This allows the selection unit 120 to select images with parallax.
- the installation state of the imaging device 10 is stored in the database 40 in association with the imaging device ID.
- the installation height may be expressed by the height of the imaging device 10 installed on the moving body 11 from the ground. Further, the installation height may be expressed by the distance of the imaging device with respect to a predetermined member of the moving body 11.
- the installation angle indicates at what angle the imaging device 10 is installed to take images.
- the installation angle may be expressed by an angle of elevation or an angle of depression. Further, the installation angle may be expressed by the orientation of the imaging device 10 with respect to the front direction of the moving body 11.
- the type of the moving body 11 on which the device is installed may be specified by the size and model of the automobile that is the moving body 11. Further, the type may be specified depending on whether the moving body 11 is a four-wheeled vehicle, a two-wheeled vehicle, a bicycle, or a drone.
- the selection unit 120 may select images taken by the imaging devices 10 installed in each of a regular car and a bus. Since an ordinary car and a bus are different in size, images from different viewpoints can be selected from the imaging devices 10 installed in each.
- the left/right position relative to the moving body 11 may also be used as the installation state of the imaging device 10.
- the left/right position indicates, for example, where in the width direction of the vehicle, which is the moving body 11, the imaging device 10 is installed.
- the left/right position may also be represented by the distance from the center of the vehicle.
- the predetermined condition is a condition related to the shooting direction included in the shooting data.
- the photographing direction is the azimuth or elevation/depression angle in which the imaging device 10 is facing, which can be acquired by a sensor included in the imaging device 10.
- the selection unit 120 may select an image based on such a shooting direction. For example, the selection unit 120 selects images taken in different shooting directions. This allows the selection unit 120 to select images with parallax.
- the predetermined condition may be a condition regarding the photographing direction of the object included in the image.
- the selection unit 120 may select an image based on a photographing direction with respect to the object, which is specified by recognizing the object through image analysis.
- the photographing direction with respect to the object indicates the positional relationship between the imaging device 10 and the object included in the image.
- Objects included in the image include, for example, road deterioration such as cracks and potholes, road markings such as partition lines, and structures on the road.
- the selection unit 120 may select images taken of the crack from a plurality of directions based on the photographing direction of the crack. This allows the selection unit 120 to select an image suitable for accurately estimating the depth of a crack. According to the photographing direction with respect to the lane marking, the traveling position of the moving object 11 within the lane can be determined. The selection unit 120 selects images based on the photographing direction with respect to the lane markings, so that the selection unit 120 can select images taken while driving at different positions within the lane. Therefore, the selection unit 120 can select images with parallax.
- the predetermined condition may be a condition regarding bias in photographic data.
- when the selection unit 120 selects more images, it is expected that the generation unit 130 can generate a more accurate three-dimensional model using the selected images.
- since images automatically captured by the drive recorders of vehicles that drive daily can be used, many similar images may be captured at a predetermined point. Even if many similar images are selected, the accuracy of the three-dimensional model does not improve. Therefore, the selection unit 120 may select images shot from various viewpoints in a well-balanced manner based on the bias of the shooting data, as described below.
- the selection unit 120 may select images so as to reduce bias in image similarity.
- the selection unit 120 may select images such that the bias between images with a high degree of similarity and images with a low degree of similarity is small.
- the selection unit 120 may select images so as to reduce the bias in the installation state. For example, when there is a large amount of photographic data from the imaging device 10 installed in an ordinary car, there will be many images photographed from low viewpoints. Therefore, the selection unit 120 equally selects images taken from a higher viewpoint by the imaging device 10 installed in a bus, a garbage truck, or the like.
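One simple way to reduce such bias is round-robin selection: group the photographic data by a bias-prone attribute (for example, an installation-height class) and take from each group in turn. A sketch under those assumptions; the grouping key and limit are hypothetical:

```python
from collections import defaultdict
from itertools import zip_longest

def balanced_selection(records, key, limit):
    """Pick up to `limit` records while reducing bias: group records by
    `key` (e.g. viewpoint-height class) and draw from the groups in turn,
    so an over-represented group cannot crowd out the others."""
    groups = defaultdict(list)
    for r in records:
        groups[key(r)].append(r)
    picked = []
    for batch in zip_longest(*groups.values()):
        for r in batch:
            if r is not None and len(picked) < limit:
                picked.append(r)
    return picked
```

With five low-viewpoint images (ordinary cars) and two high-viewpoint images (buses), a limit of four yields two of each instead of four low-viewpoint ones.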
- the predetermined condition is a condition regarding the presence or absence of road deterioration on the road in the image.
- the selection unit 120 may select images in which road deterioration can be detected. Furthermore, the selection unit 120 may select more images for areas with road deterioration than for areas without road deterioration, based on the presence or absence of road deterioration. This allows the generation unit 130 to express the depth of road deterioration with higher accuracy using more images.
- the selection unit 120 selects a plurality of images in which road deterioration can be detected from among images taken at a predetermined point. Then, the selection unit 120 further selects an image suitable for generating a three-dimensional model of road deterioration based on the degree of similarity of the selected images. Therefore, the selection unit 120 selects at least two similar images from among images in which road deterioration is well depicted. Thereby, the generation unit 130 can generate a three-dimensional model of road surface deterioration with high accuracy.
- the selection unit 120 may select images based on various types of information. For example, the selection unit 120 may select images based on time information of photographic data. The selection unit 120 may select images taken during the same time period. Further, the selection unit 120 may select images taken during a time period such as daytime when there are few shadows due to road deterioration. The selection unit 120 may further select images taken in the same weather based on weather information on the date and time the images were taken. Further, the selection unit 120 may select images captured by the same model of imaging device 10.
- the predetermined conditions under which the selection unit 120 selects images suitable for generating a three-dimensional model of a road surface or a structure on a road have been described above. Next, generation of a three-dimensional model by the generation unit 130 will be explained.
- the generation unit 130 performs matching processing on corresponding points between the images selected by the selection unit 120.
- the generation unit 130 uses an arbitrary matching algorithm to find a corresponding pixel in the other image for a certain reference pixel in one image. For example, the generation unit 130 extracts feature points from images and associates feature points between images.
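As a toy illustration of such a matching algorithm (not the patent's method), the sketch below does one-dimensional block matching on a pair of scanlines: for a reference pixel in one row it finds the position in the other row minimising the sum of squared differences over a small window:

```python
def match_point(row_a, row_b, x, patch=2):
    """For a reference pixel x in scanline row_a, find the best matching
    position in row_b by minimising the sum of squared differences over
    a (2*patch + 1)-pixel window: a 1-D version of block matching."""
    best_x, best_cost = None, float("inf")
    for cx in range(patch, len(row_b) - patch):
        cost = sum((row_a[x + k] - row_b[cx + k]) ** 2
                   for k in range(-patch, patch + 1))
        if cost < best_cost:
            best_x, best_cost = cx, cost
    return best_x
```

For a pattern shifted right by 3 pixels between the rows, the match lands 3 pixels to the right of the reference, and that offset is the disparity fed to the distance calculation.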
- the generation unit 130 may perform matching by converting the brightness or hue of the image in order to eliminate the influence of sunlight conditions.
- the generation unit 130 may use information about the shooting direction and installation state included in the shooting data for matching the corresponding points.
- the generation unit 130 may detect an area with road deterioration and perform matching processing on the detected area with road deterioration.
- the generation unit 130 detects road deterioration using a known image recognition technique on the image.
- the generation unit 130 may detect road deterioration using the learned model.
- the generation unit 130 may determine whether or not the road is deteriorated for each pixel of the image.
- the generation unit 130 then associates points representing road deterioration between images. For example, the generation unit 130 may detect a crack area from each of the two images and find pixels to which each of the detected cracks corresponds.
- the generation unit 130 may detect a road area and perform matching processing on the detected road area. Thereby, the generation unit 130 can prevent, for example, matching the road area of one image with the building area of the other image.
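A sketch of restricting the matching to the detected road area, assuming a boolean road mask is available from a separate segmentation step (the segmentation itself is outside the scope of this sketch):

```python
import numpy as np


def restrict_to_road(img: np.ndarray, road_mask: np.ndarray) -> np.ndarray:
    """Zero out every pixel outside the road mask so that subsequent
    corresponding-point matching can only pair road pixels with road
    pixels (and never, e.g., a road area with a building area)."""
    return np.where(road_mask, img, 0)
```

Applying this to both images before matching ensures candidate patches drawn from non-road regions cannot win the correspondence search.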
- the generation unit 130 calculates the three-dimensional coordinates of the points associated between the images based on the photographic data. For example, the generation unit 130 calculates the distance using the principle of triangulation, based on the focal length of the imaging device 10, the parallax between the matched reference pixel and corresponding pixel, and the distance between the imaging devices 10 that captured the images. The generation unit 130 then generates three-dimensional point cloud information from the distance calculated for each pixel.
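The triangulation step above can be written out with the standard pinhole stereo relations; the principal point (cx, cy) and pixel-unit focal length are standard stereo-vision quantities, not values taken from the disclosure:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Classic stereo triangulation: Z = f * B / d, where f is the focal
    length in pixels, B the distance between the two imaging devices
    (the baseline length), and d the parallax in pixels between the
    matched reference pixel and corresponding pixel."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px


def to_point3d(x_px: float, y_px: float, cx: float, cy: float,
               focal_px: float, depth_m: float) -> tuple:
    """Back-project a pixel to camera-frame 3-D coordinates using the
    pinhole model; (cx, cy) is the principal point in pixels."""
    return ((x_px - cx) * depth_m / focal_px,
            (y_px - cy) * depth_m / focal_px,
            depth_m)
```

Repeating the back-projection for every matched pixel yields the three-dimensional point cloud.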
- the distance between the imaging devices 10 that captured the images is a parameter required for converting the parallax between the images into distance, and is also referred to as the baseline length.
- the generation unit 130 calculates the baseline length based on the imaging conditions included in the photographic data, such as the installation state of the imaging device 10 and the shooting direction of the image. The generation unit 130 may also calculate the baseline length based on the position of the imaging device 10 on the road at the time of shooting, estimated from the image. The position on the road is estimated from the appearance of a reference object in the image; the lateral position on the road is estimated, for example, from how the lane markings appear.
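One way the lane-marking idea above could work, sketched under explicit assumptions: the lateral position is interpolated between the detected marking x-coordinates, and the baseline is approximated as the difference between two such positions. The 3.5 m lane width is an assumed default, and the "mainly lateral offset" simplification is not stated in the disclosure:

```python
def lateral_position_m(x_left_px: float, x_right_px: float,
                       x_center_px: float, lane_width_m: float = 3.5) -> float:
    """Estimate the camera's lateral position within the lane from how
    the lane markings appear in the image: linear interpolation between
    the detected left and right marking x-coordinates."""
    frac = (x_center_px - x_left_px) / (x_right_px - x_left_px)
    return frac * lane_width_m


def baseline_m(pos_a_m: float, pos_b_m: float) -> float:
    """Baseline length approximated as the difference between two
    estimated lateral positions (assumes the two imaging devices
    differ mainly in lateral offset)."""
    return abs(pos_a_m - pos_b_m)
```
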
- the generation unit 130 may convert the three-dimensional point cloud information into a form that is easier to recognize visually. For example, the generation unit 130 performs texture mapping by pasting the image region enclosed by three nearby feature points onto the triangle formed by the three corresponding three-dimensional coordinate points. The generation unit 130 can thereby generate a three-dimensional model composed of triangular polygon surfaces.
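Forming the triangular polygon surfaces can be sketched with a trivial fan triangulation; this is a stand-in of my own, since production systems would typically use a Delaunay triangulation of the feature points (an assumption, not stated in the text):

```python
from typing import List, Tuple


def fan_triangulation(n_points: int) -> List[Tuple[int, int, int]]:
    """Split an ordered convex polygon of n points into n - 2 triangles
    by fanning out from vertex 0. Each index triple designates one
    triangular polygon face; the image region enclosed by the three
    corresponding feature points is then pasted onto that face."""
    return [(0, i, i + 1) for i in range(1, n_points - 1)]
```
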
- each component of the three-dimensional system 100 represents a functional unit block. A part or all of each component of the three-dimensional system 100 may be realized by any combination of the computer 500 and a program.
- FIG. 5 is a block diagram showing an example of the hardware configuration of the computer 500.
- the computer 500 includes, for example, a processor 501, a ROM (Read Only Memory) 502, a RAM (Random Access Memory) 503, a program 504, a storage device 505, a drive device 507, a communication interface 508, an input device 509, an output device 510, an input/output interface 511, and a bus 512.
- a processor 501 controls the entire computer 500.
- Examples of the processor 501 include a CPU (Central Processing Unit).
- the number of processors 501 is not particularly limited; the computer 500 may include one or more processors 501.
- the program 504 includes instructions for realizing each function of the three-dimensional system 100.
- the program 504 is stored in advance in the ROM 502, RAM 503, or storage device 505.
- the processor 501 realizes each function of the three-dimensional system 100 by executing the instructions included in the program 504.
- the RAM 503 may store data processed in each function of the three-dimensional system 100.
- the captured image may be stored in the RAM 503 of the computer 500.
- the drive device 507 reads data from and writes data to the recording medium 506.
- the communication interface 508 provides an interface with a communication network.
- the input device 509 is, for example, a mouse or keyboard, and accepts information input from an administrator or the like.
- the output device 510 is, for example, a display, and outputs (displays) information to an administrator or the like.
- the input/output interface 511 provides an interface with peripheral devices.
- the bus 512 connects these hardware components.
- the program 504 may be supplied to the processor 501 via a communication network, or may be stored in advance on the recording medium 506 and read out by the drive device 507 and supplied to the processor 501.
- the hardware configuration shown in FIG. 5 is an example; components other than these may be added, or some components may not be included.
- the three-dimensional system 100 may be realized by any combination of different computers and programs for each component.
- the plurality of components included in the three-dimensional system 100 may be realized by an arbitrary combination of one computer and a program.
- At least a part of the three-dimensional system 100 may be provided in a SaaS (Software as a Service) format. That is, at least part of the functions for realizing the three-dimensional system 100 may be executed by software executed via a network.
Abstract
Description
FIG. 2 is a block diagram showing a configuration example of the three-dimensionalization system 100 according to the first embodiment. The three-dimensionalization system 100 includes an acquisition unit 110, a selection unit 120, and a generation unit 130. The three-dimensionalization system 100 further includes an output unit 140 as necessary.
Next, as a second embodiment, the three-dimensionalization system 100 will be described in more detail. Regarding the configuration of the second embodiment, description of configurations similar to those of the first embodiment is omitted.
In each of the embodiments described above, each component of the three-dimensionalization system 100 represents a functional unit block. Some or all of the components of the three-dimensionalization system 100 may be realized by any combination of a computer 500 and a program.
110 Acquisition unit
120 Selection unit
130 Generation unit
140 Output unit
10 Imaging device
20 Display
21 Input device
30 Communication network
40 Database
Claims (10)
- A three-dimensionalization system comprising:
acquisition means for acquiring a plurality of images captured by imaging devices installed on each of a plurality of moving bodies;
selection means for selecting, from among the acquired plurality of images and based on a predetermined condition, at least two images capturing a road surface or a structure on a road; and
generation means for generating, using the selected images, a three-dimensional model of the captured road surface or structure on the road.
- The three-dimensionalization system according to claim 1, wherein the selection means selects the images based on a degree of similarity between the images.
- The three-dimensionalization system according to claim 1 or 2, wherein the selection means selects the images based on an installation state of the imaging device.
- The three-dimensionalization system according to any one of claims 1 to 3, wherein the selection means selects the images based on a shooting direction with respect to an object included in the images.
- The three-dimensionalization system according to any one of claims 1 to 4, wherein the selection means selects the images based on a type of the moving body on which the imaging device is installed.
- The three-dimensionalization system according to claim 2, wherein the selection means selects the images so as to reduce bias in the degree of similarity between the images.
- The three-dimensionalization system according to any one of claims 1 to 6, wherein the selection means selects the images suitable for estimating a depth of road deterioration, and the generation means generates the three-dimensional model indicating the depth of road deterioration of the captured road surface.
- The three-dimensionalization system according to claim 7, wherein the selection means selects the images in which road deterioration is detectable.
- A three-dimensionalization method comprising:
acquiring a plurality of images captured by imaging devices installed on each of a plurality of moving bodies;
selecting, from among the acquired plurality of images and based on a predetermined condition, at least two images capturing a road surface or a structure on a road; and
generating, using the selected images, a three-dimensional model of the captured road surface or structure on the road.
- A recording medium non-transitorily recording a program that causes a computer to execute processing of:
acquiring a plurality of images captured by imaging devices installed on each of a plurality of moving bodies;
selecting, from among the acquired plurality of images and based on a predetermined condition, at least two images capturing a road surface or a structure on a road; and
generating, using the selected images, a three-dimensional model of the captured road surface or structure on the road.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US18/873,772 US20250371690A1 (en) | 2022-09-22 | 2022-09-22 | Three-dimensionalization system, three-dimensionalization method, and recording medium for generating model of road surface or structure on road |
| JP2024548036A JP7806918B2 (ja) | 2022-09-22 | Three-dimensionalization system, three-dimensionalization method, and program | |
| PCT/JP2022/035391 WO2024062602A1 (ja) | 2022-09-22 | 2022-09-22 | Three-dimensionalization system, three-dimensionalization method, and recording medium recording program |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2022/035391 WO2024062602A1 (ja) | 2022-09-22 | 2022-09-22 | Three-dimensionalization system, three-dimensionalization method, and recording medium recording program |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024062602A1 true WO2024062602A1 (ja) | 2024-03-28 |
Family
ID=90454057
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2022/035391 Ceased WO2024062602A1 (ja) | 2022-09-22 | 2022-09-22 | 3次元化システム、3次元化方法及びプログラムを記録する記録媒体 |
Country Status (2)
| Country | Link |
|---|---|
| US (1) | US20250371690A1 (ja) |
| WO (1) | WO2024062602A1 (ja) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2013079889A (ja) * | 2011-10-05 | 2013-05-02 | Shuichi Kameyama | Road surface unevenness evaluation system |
| JP2014186004A (ja) * | 2013-03-25 | 2014-10-02 | Toshiba Corp | Measuring device, method, and program |
| JP2020046228A (ja) * | 2018-09-14 | 2020-03-26 | Ricoh Co., Ltd. | Measuring device, measuring system, and vehicle |
| JP2021177317A (ja) * | 2020-05-08 | 2021-11-11 | Symmetry Dimensions Inc. | Three-dimensional model construction system and three-dimensional model construction method |
2022
- 2022-09-22 WO PCT/JP2022/035391 patent/WO2024062602A1/ja not_active Ceased
- 2022-09-22 US US18/873,772 patent/US20250371690A1/en active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2024062602A1 (ja) | 2024-03-28 |
| US20250371690A1 (en) | 2025-12-04 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10962366B2 (en) | Visual odometry and pairwise alignment for high definition map creation | |
| CA2678156C (en) | Measurement apparatus, measurement method, and feature identification apparatus | |
| JP4284644B2 (ja) | Three-dimensional model construction system and three-dimensional model construction program | |
| KR20220064524A (ko) | Image-based positioning method and system | |
| US10872246B2 (en) | Vehicle lane detection system | |
| KR102200299B1 (ko) | System and method for implementing road facility management solution based on 3D-VR multi-sensor system | |
| JP2020500290A (ja) | Method and system for generating and using localization reference data | |
| KR102218881B1 (ko) | Method and system for determining vehicle position | |
| WO2021017211A1 (zh) | Vision-based vehicle positioning method and apparatus, and in-vehicle terminal | |
| CN112749584B (zh) | Vehicle positioning method based on image detection, and in-vehicle terminal | |
| JP7610741B2 (ja) | Feature management system | |
| CA3040599C (en) | Method and system for generating environment model and for positioning using cross-sensor feature point referencing | |
| JPWO2016031229A1 (ja) | Road map creation system, data processing device, and in-vehicle device | |
| CN114765972A (zh) | Display method for presenting a model of a vehicle's surroundings, computer program, controller, and vehicle | |
| KR102441100B1 (ko) | System and method for constructing road fingerprint data using LAS data | |
| US11485373B2 (en) | Method for a position determination of a vehicle, control unit, and vehicle | |
| JP7806918B2 (ja) | Three-dimensionalization system, three-dimensionalization method, and program | |
| KR20220050386A (ko) | Map generation method and image-based positioning system using the same | |
| US10762690B1 (en) | Simulated overhead perspective images with removal of obstructions | |
| WO2024062602A1 (ja) | Three-dimensionalization system, three-dimensionalization method, and recording medium recording program | |
| Dai et al. | Roadside edge sensed and fused three-dimensional localization using camera and lidar | |
| CN112530270B (zh) | Mapping method and device based on region allocation | |
| Niskanen et al. | Trench visualisation from a semiautonomous excavator with a base grid map using a TOF 2D profilometer | |
| Gunay et al. | Semi-automatic true orthophoto production by using LIDAR data | |
| KR20250170382A (ko) | Device and method for map updating | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22959566 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 18873772 Country of ref document: US |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2024548036 Country of ref document: JP |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 22959566 Country of ref document: EP Kind code of ref document: A1 |
|
| WWP | Wipo information: published in national office |
Ref document number: 18873772 Country of ref document: US |