
WO2019236064A1 - Automatically updating a georeferencing graphical user interface for navigation line adjustments - Google Patents

Automatically updating a georeferencing graphical user interface for navigation line adjustments

Info

Publication number
WO2019236064A1
Authority
WO
WIPO (PCT)
Prior art keywords
navigation line
line
view
cross-sectional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2018/036028
Other languages
French (fr)
Inventor
Edward Patrick Ellwyn Collins
Dominic Allan RORKE
Thomas Bartholomew O'TOOLE
James Iain SCOTCHMAN
Benjamin Stephen SAUNDERS
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Landmark Graphics Corp
Original Assignee
Landmark Graphics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Landmark Graphics Corp filed Critical Landmark Graphics Corp
Priority to PCT/US2018/036028 priority Critical patent/WO2019236064A1/en
Priority to FR1903593A priority patent/FR3082026A1/en
Publication of WO2019236064A1 publication Critical patent/WO2019236064A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C13/00 Surveying specially adapted to open water, e.g. sea, lake, river or canal
    • G01C13/008 Surveying specially adapted to open water, e.g. sea, lake, river or canal measuring depth of open water
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88 Sonar systems specially adapted for specific applications
    • G01S15/89 Sonar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/56 Display arrangements
    • G01S7/62 Cathode-ray tube displays
    • G01S7/6218 Cathode-ray tube displays providing two-dimensional coordinated display of distance and direction
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/56 Display arrangements
    • G01S7/62 Cathode-ray tube displays
    • G01S7/6281 Composite displays, e.g. split-screen, multiple images
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B63 SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63B SHIPS OR OTHER WATERBORNE VESSELS; EQUIPMENT FOR SHIPPING
    • B63B49/00 Arrangements of nautical instruments or navigational aids
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88 Radar or analogous systems specially adapted for specific applications
    • G01S13/89 Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging

Definitions

  • the present disclosure relates generally to systems and methods for use in a graphical user interface. More specifically, but not by way of limitation, this disclosure relates to automatically updating a georeferencing graphical user interface for navigation line adjustments.
  • FIG. 1 is a contextual view of an example of a data-acquiring system gathering data points for a subsurface structure image according to one aspect of the disclosure.
  • FIG. 2 is a block diagram of an example of a computing device usable for executing program code for using dual views of a geographic feature in both plan and cross-sectional views generated using data to modify a navigation line to form a line of section according to one aspect of the disclosure.
  • FIG. 3 is a flowchart describing a process for automatically updating a position in a navigation line in a first window in response to a change to the navigation line in a second window according to one aspect of the disclosure.
  • FIG. 4 is a flowchart describing a process for importing, error-checking, updating, and converting a line of section using a graphical user interface according to one aspect of the disclosure.
  • FIG. 5 is a display of an example of a graphical user interface including a digitized georeferencing representation in a plan view and a line of section data capture form according to one aspect of the disclosure.
  • FIG. 6 is a display of an example of a graphical user interface including a window for importing a cross-sectional image and its associated attributes according to one aspect of the disclosure.
  • FIG. 7 is a display of an example of a graphical user interface including a window for editing a cross-sectional image and its associated attributes according to one aspect of the disclosure.
  • FIG. 8 is a display of an example of a graphical user interface including an image editing window for cropping and rotating a cross-sectional view of a subsurface structure image according to one aspect of the disclosure.
  • FIG. 9 is a display of an example of a graphical user interface including a cross-sectional view of a subsurface structure image for comparison with a digital elevation model profile according to one aspect of the disclosure.
  • FIG. 10 is a display of an example of a graphical user interface including a digitized georeferencing representation in a plan view that is updated based on a comparison of a subsurface structure image and digital elevation model according to one aspect of the disclosure.
  • FIG. 11 is a display of an example of a graphical user interface including a revised line of section data capture form according to one aspect of the disclosure.
  • Certain aspects and features relate to using dual views of a geographic feature in both plan and cross-sectional views generated using data to modify a navigation line to form and position a line of section. These aspects and features relate to providing a system and method for updating one window of a graphical user interface displaying a plan view of a navigation line in response to automatically detecting an adjustment to the navigation line in another window shown as a cross-sectional view.
  • the process can include accurately and efficiently mapping, automatically adjusting, transforming, and storing subsurface structure attributes and corresponding raster images in a single database to reduce uncertainty in bathymetry and topography applications where raw georeferencing data is scarce or has been lost.
  • a graphical user interface can contain user-viewable windows for displaying a navigation line in various formats and with respect to different formation representations and profiles.
  • a navigation line corresponding to the traversed path of a data-acquiring system such as a vessel in the form of a boat, can be displayed in a window of the graphical user interface as a plan view of a digitized georeferencing representation.
  • the digitized georeferencing representation can be defined by attributes and associated with subsurface structure images corresponding to measured data points acquired by the data-acquiring system along the navigation line.
  • the graphical user interface can generate a cross-sectional view including the subsurface structure image corresponding to the navigation line depicted within the digitized georeferencing representation (e.g., FIG. 9).
  • the cross-sectional view can allow for comparison of the subsurface structure image with the elevations corresponding to the navigation line position for purposes of adjusting the position of the subsurface structure image to more accurately append to the elevation profile.
  • the cross-sectional view can further allow for comparison of the navigation line and appended subsurface structure image against a digital elevation model profile for purposes of more accurately locating the navigation line, and the navigation line’s corresponding attributes and subsurface structure image, with respect to its relative position in three-dimensional space shown by the digitized georeferencing representation (e.g., FIG. 10).
  • the geometry (e.g., associated attributes) of the navigation line can be adjusted in terms of orientation, location, and line length when comparing the navigation line to the subsurface structure image in the cross-sectional view window.
  • the graphical user interface can adjust the position and orientation of the navigation line in the other window displaying a plan view of the digitized georeferencing representation.
  • the graphical user interface can store the adjusted navigation line as a line of section. The line of section can then be transformed into an appropriate format to be used within a three-dimensional viewing environment (e.g., SEG-Y).
  • The aforementioned example (i.e., automatically detecting an adjustment to a navigation line in a cross-sectional view in a second graphical user interface window and updating the position and orientation of the navigation line in a plan view of a first graphical user interface window in response to the detected adjustment) can be part of a larger computer-implemented workflow. This workflow, which can include the automatic navigation line adjustment, can standardize the processes from (i) importing a subsurface structure image with associated attributes to (ii) using the line of section in its three-dimensional viewing format.
  • the computer-implemented workflow may include steps such as (i) identifying a subsurface structure image from a data source (e.g., database 210 of FIG. 2), (ii) extracting georeferencing coordinates from the data source, digitizing or georeferencing the navigation line, (iii) storing the navigation line and its associated attributes, (iv) appending the subsurface structure image to the navigation line via geometric manipulation (e.g., cropping, rotating, stretching, shrinking, etc.), and (v) loading a digital elevation model profile.
  • a navigation line can be a line corresponding to a traversed pathway or journey of a data-acquiring system during which the system recorded data about the bathymetry and/or topography of a surface.
  • the navigation line can be linear, relatively linear, or of any geometry resulting from implementation of any conventional bathymetric and topographic measuring methods.
  • the navigation line can include any associated attributes related to the recorded bathymetric and topographic data including a range of depth measurements, a top depth, a base depth, an image type, information about the source of the recorded data, a measurement unit type, a geological age range, a domain type, a navigation line length, and a navigation line geometry.
  • the navigation line can be associated with a corresponding location map or georeferencing coordinates, which can be used to locate the navigation line at a location or series of locations or coordinates within three-dimensional space.
  • the attributes, location map, and georeferencing coordinates can be retrieved from various data sources (e.g., a database). The retrieved attributes, location map, or georeferencing coordinates may not be accurate with respect to the real-world locations at which the data-acquiring system recorded data, and the data recorded at those locations may not be accurate.
  • the attributes, location map, or georeferencing coordinates associated with the navigation line can be used to digitize the line into a software application or graphical user interface (e.g., ESRI ArcGIS/ArcMap) to create a digitized georeferenced representation of the line within a digitized georeferencing representation.
  • the location map or georeferencing coordinates can be used to identify an initial location of the navigation line within the digitized georeferencing representation, and the attributes of the navigation line can further define the navigation line at those identified navigation line locations.
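  • As an illustration only (not taken from the disclosure), retrieved georeferencing coordinates could be turned into a digitized navigation-line geometry whose vertices and length can then be placed within a digitized georeferencing representation; the coordinate values below are invented and the use of the shapely library is an assumption.

```python
# Minimal sketch, assuming georeferencing coordinates have already been retrieved
# from a data source; the coordinate values below are invented for illustration.
from shapely.geometry import LineString

coords = [(431000.0, 6.248e6), (432200.0, 6.2485e6), (433500.0, 6.249e6)]
navigation_line = LineString(coords)   # digitized navigation-line geometry

print(navigation_line.length)          # planar line length in map units
print(list(navigation_line.coords))    # vertices that position the line in the plan view
```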
  • the digitized georeferencing representation can be a plan view (e.g., map view) of a location or area in three-dimensional space.
  • the digitized georeferencing representation can include colors, shading, or any other conventional markings to represent a range of disparate elevations across a surface. Elevation measurements recorded by a data-acquiring system can correspond to the relative elevations shown at the location or series of locations of the digitized georeferenced navigation line within the digitized georeferencing representation.
  • the navigation line can be associated with a corresponding subsurface structure image.
  • a subsurface structure image can be a two-dimensional vertical image representing a range of depths measured along the pathway of a data-acquiring system.
  • the range of depths depicted by a subsurface structure image can be represented by one or more sets of elevation lines defining a bathymetry of a surface or multiple surfaces.
  • the navigation line can correspond to the pathway of the data-acquiring system.
  • a subsurface structure image and navigation line can be compared to a digital elevation model profile for purposes of matching the navigation line and corresponding subsurface structure image to the digital elevation model profile.
  • a subsurface structure image can include elevation measurement data for the length of the navigation line.
  • the position or orientation of the navigation line can be adjusted to better conform to the digital elevation model profile, which can result in the attributes and the digitized georeferenced location or locations of the navigation line being appended to represent the adjusted navigation line.
  • the adjusted navigation line can more accurately define the locations at which the data-acquiring system acquired the depth measurements than the original navigation line.
  • the digital elevation model profile can be a three-dimensional computer graphics representation of a terrain or underwater surface including a range of elevations across an area.
  • a subsurface structure image can correspond to a two-dimensional representation of a range of depths that can be geolocated within three-dimensional space using the digital elevation model profile.
  • a three-dimensional viewing environment can include multiple data points corresponding to multiple subsurface structure images.
  • a line of section can be the resulting pairing of a subsurface structure image with a corresponding navigation line.
  • a line of section can include information relating to both a navigation line and its corresponding subsurface structure image, which can be stored and used within a three-dimensional viewing environment in further bathymetry or topography applications.
  • the three-dimensional viewing environment can implement a SEG-Y file format, which is a standardized file format for storing geophysical data.
  • FIG. 1 is a contextual view depicting a data-acquiring system gathering data points for a subsurface structure image according to one example.
  • the data-acquiring system 102 can travel over a subsurface 108 while simultaneously measuring elevation data points corresponding to the subsurface 108 with a data collection device 104.
  • the data-acquiring system 102 can also measure data points corresponding to one or more subsurfaces (e.g., subsurface 110) located beneath the subsurface 108.
  • Primary functions of the data-acquiring system 102 can include determining location, geometry, and other attributes of subsurface structures (e.g., subsurface formations).
  • the data-acquiring system 102 can measure data points and features related to bathymetry and topography measurements.
  • the data-acquiring system 102 can be any conventional guided or controllable tool or vessel implemented by conventional methods for measuring bathymetry data.
  • the data-acquiring vessel can be a ship including the data collection device 104.
  • the data collection device 104 can be any device that can implement any conventional method for measuring bathymetry data.
  • the data collection device 104 can implement methods for measuring bathymetry data including depth sounding, sonar, LIDAR/LADAR, and single-beam or multi-beam echosounding.
  • other conventional methods for subsurface elevation data collection, such as satellite radar, can be used in place of the data-acquiring vessel 102 and the data collection device 104.
  • the aforementioned example methods for measuring bathymetry data can be used to measure data points useable for generating a digital elevation model profile.
  • data collection device 104 can transmit feedback waves 106.
  • the feedback waves 106 can be sound or light waves used to determine the depths of the subsurface 108 below the data-acquiring system 102.
  • the feedback waves 106 can be emitted by the data collection device 104 and traverse the water until reaching an object of significant size and density.
  • the feedback waves 106 can be reflected by the subsurface 108. After being reflected by the subsurface 108, the feedback waves can traverse the water back to the data collection device 104.
  • the data collection device 104 can determine the depths of the subsurface 108 at any given location by determining the total travel time, or two-way-time, from initially sending feedback waves 106 to receiving feedback waves 106 after being reflected by the subsurface 108.
  • a longer two-way-time can represent a larger depth value (i.e., a lower elevation level) compared to a shorter two-way-time representing a smaller depth value (i.e., a higher elevation level).
  • the data collection device 104 can construct a subsurface structure image representing the depths at each measurement point along the navigation line of the data-acquiring system 102.
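  • As a hedged illustration of the two-way-time relationship described above (not part of the disclosure), the sketch below converts a measured two-way travel time into a depth estimate using an assumed constant speed of sound in seawater; a real survey would use a measured velocity profile.

```python
# Minimal sketch: depth from two-way travel time. The sound speed is an assumed
# nominal value, not a parameter taken from the disclosure.
WATER_SOUND_SPEED_M_S = 1500.0  # approximate speed of sound in seawater (m/s)

def depth_from_two_way_time(twt_seconds: float,
                            velocity: float = WATER_SOUND_SPEED_M_S) -> float:
    """The wave travels down and back, so halve the total path length."""
    return velocity * twt_seconds / 2.0

print(depth_from_two_way_time(0.2))  # a 0.2 s two-way time is roughly 150 m of depth
```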
  • FIG. 2 is a block diagram of a computing device 200 usable for executing program code for using dual views of a geographic feature in both plan and cross-sectional views generated using data to modify a navigation line to form a line of section according to one example.
  • the computing device 200 can include a processor 202, a bus 204, a communications port 206, and a memory 208.
  • the components shown in FIG. 2 (e.g., the processor 202, the bus 204, the communications port 206, and the memory 208) can be integrated into a single structure.
  • the components can be within a single housing.
  • the components shown in FIG. 2 can be distributed (e.g., in separate housings) and in electrical communication with each other.
  • the processor 202 can execute one or more operations for implementing some examples.
  • the processor 202 can execute instructions stored in the memory 208 to perform the operations.
  • the processor 202 can include one processing device or multiple processing devices.
  • Non-limiting examples of the processor 202 include a Field-Programmable Gate Array (“FPGA”), an application-specific integrated circuit (“ASIC”), a microprocessor, etc.
  • the processor 202 can be communicatively coupled to the memory 208 via the bus 204.
  • the memory 208 may include any type of non-volatile memory device that retains stored information when powered off.
  • Non-limiting examples of the memory 208 include electrically erasable and programmable read-only memory (“EEPROM”), flash memory, or any other type of non-volatile memory.
  • at least some of the memory 208 can include a medium from which the processor 202 can read instructions.
  • a computer-readable medium can include electronic, optical, magnetic, or other storage devices capable of providing the processor 202 with computer-readable instructions or other program code.
  • Non-limiting examples of a computer-readable medium include (but are not limited to) magnetic disk(s), memory chip(s), ROM, random-access memory (“RAM”), an ASIC, a configured processor, optical storage, or any other medium from which a computer processor can read instructions.
  • the instructions can include processor-specific instructions generated by a compiler or an interpreter from code written in any suitable computer-programming language, including, for example, C, C++, C#, etc.
  • the database 210 can include information related to bathymetry data including location maps, georeferencing coordinates, navigation line attributes, and subsurface structure images.
  • Communications port 206 can interface with the database 210 to transfer location maps, georeferencing coordinates, navigation line attributes, or subsurface structure images to the computing device 200.
  • Bathymetry data received by the communications port 206 can be transmitted to the memory 208 via the bus 204.
  • the memory 208 can store any received bathymetry data and any data relating to a line of section for implementing some examples.
  • the memory 208 can store at least some characteristics of the cross-sectional and plan views from each window along with their associated attributes.
  • the memory 208 can include program code for a user interface generation module 212, a display module 214, a user interface update module 216, and a line analysis module 218.
  • the user interface generation module 212 can generate non-proprietary user interface windows within a proprietary graphical user interface architecture in preparation of displaying the interface windows to a user.
  • the display module 214 can output the interface windows generated by user interface generation module 212 to a user along with any associated proprietary interface windows.
  • the user interface update module 216 can update information or visual representations in one or more interface windows in response to automatically detecting a change in information (e.g., adjustment to the position of a navigation line) in another interface window.
  • the line analysis module 218 can detect a change in the position or geometry of a navigation line for the purpose of activating the user interface update module 216.
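  • The following sketch shows one hypothetical way the line analysis module could notify the user interface update module when it detects a navigation-line change; the class and method names are illustrative and are not defined by the disclosure.

```python
# Hypothetical observer-style wiring between a line analysis module and a user
# interface update module; names and structure are assumptions for illustration.
from typing import Callable, List, Tuple

Coords = List[Tuple[float, float]]

class LineAnalysisModule:
    def __init__(self) -> None:
        self._listeners: List[Callable[[Coords], None]] = []
        self._current: Coords = []

    def register(self, listener: Callable[[Coords], None]) -> None:
        self._listeners.append(listener)

    def set_line(self, coords: Coords) -> None:
        # Detect a change in the line's position or geometry and notify listeners.
        if coords != self._current:
            self._current = coords
            for listener in self._listeners:
                listener(coords)

class UserInterfaceUpdateModule:
    def on_line_adjusted(self, coords: Coords) -> None:
        # Redraw the navigation line in the plan-view window.
        print(f"Updating plan view with {len(coords)} vertices")

analysis = LineAnalysisModule()
ui_update = UserInterfaceUpdateModule()
analysis.register(ui_update.on_line_adjusted)
analysis.set_line([(0.0, 0.0), (100.0, 50.0)])  # triggers the plan-view update
```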
  • FIG. 3 is a flowchart describing a process for automatically updating a position in a navigation line in a first window in response to a change to the navigation line in a second window according to one aspect of the disclosure.
  • the display module 214 displays a first window of a user interface as a plan view.
  • the user interface in the first window can be a digitized georeferencing representation that can include a digitized georeferenced navigation line representing a real-world navigation line.
  • the first window containing the digitized georeferenced navigation line within the digitized georeferencing representation in a plan view can depict the relative elevations across which a data acquiring system can record bathymetric and topographic information (e.g., the graphical user interface 502 of FIG. 5, where the navigation line 508 crosses various elevations shown in the plan view).
  • the digitized georeferenced navigation line can be generated using or derived from bathymetry data measured by a data-acquiring system (e.g., data-acquiring system 102 of FIG. 1). Measurements and coordinates derived from the data-acquiring system may not accurately represent the real-world measurement locations and may only describe estimated measurement locations. As such, additional data points from various data sources (e.g., database 210 of FIG. 2) can be used to more precisely and accurately identify the locations representing the navigation line (i.e., the identified locations of the navigation line can be used to update the corresponding depiction of the digitized navigation line within the digitized georeferencing representation).
  • Any relevant attributes corresponding to the navigation line can be linked to the digitized georeferenced navigation line such that those attributes can be adjusted to alter the characteristics of the digitized georeferenced navigation line (e.g., domain type, line location, line geometry) in a separate window.
  • the user interface generation module 212 generates and then displays, via the display module 214, a second window of the user interface as a cross-sectional view (e.g., FIG. 9).
  • the cross-sectional view in the second window can include a subsurface structure image and a digital elevation model profile line corresponding to the navigation line.
  • the digital elevation model profile line can be a two-dimensional representation of at least some elevations stored in a three-dimensional digital elevation model profile.
  • the elevations represented by the digital elevation model profile line can correspond to the elevations at the locations (e.g., pathway) of the navigation line.
  • the resulting graphical user interface can display, via display module 214, two separate user interface windows: (i) a plan view interface depicting a navigation line (e.g., FIG. 5), and (ii) a cross-sectional view interface depicting a subsurface structure image and the digital elevation model profile line corresponding to the navigation line.
  • the digital elevation model profile line of the cross-sectional view in the second window can represent the elevation levels at the position of the navigation line in the first window.
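  • For illustration only, a digital elevation model profile line such as the one described above could be produced by sampling a gridded digital elevation model at each vertex of the navigation line; the nearest-neighbour sampling, grid origin, and cell size below are assumptions, not details from the disclosure.

```python
# Sketch, under assumptions: build a DEM profile line by sampling a DEM raster at
# the navigation-line vertices (nearest-neighbour lookup for simplicity).
import numpy as np

def dem_profile_line(dem, origin_xy, cell_size, line_xy):
    x0, y0 = origin_xy
    profile = []
    for x, y in line_xy:
        col = min(max(int(round((x - x0) / cell_size)), 0), dem.shape[1] - 1)
        row = min(max(int(round((y - y0) / cell_size)), 0), dem.shape[0] - 1)
        profile.append(dem[row, col])
    return np.asarray(profile)

dem = np.random.default_rng(0).uniform(-200.0, -50.0, size=(100, 100))  # fake DEM
line = [(10.0 * i, 5.0 * i) for i in range(20)]                          # fake line
print(dem_profile_line(dem, origin_xy=(0.0, 0.0), cell_size=10.0, line_xy=line))
```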
  • the graphical user interface can generate, in a separate view of the same window, a cross-sectional view including the subsurface structure image corresponding to the digitized georeferencing representation and the navigation line.
  • the line analysis module 218 automatically detects an adjustment to the position of the navigation line by comparing the cross-sectional view to a digital elevation model profile.
  • a digital elevation model profile can be loaded and superimposed as a digital elevation model profile line against the cross-sectional view including a subsurface structure image in the second window. This can allow for comparison of the digital elevation model profile against the bathymetry/topography of the subsurface structure image corresponding to the navigation line for the purposes of detecting an adjustment to the navigation line.
  • the digital elevation model profile can contain three-dimensional elevation measurements encompassing the locations at which the data-acquiring system recorded the depth measurements.
  • the second window can include the subsurface structure image corresponding to the navigation line, where the subsurface structure image includes bathymetry/topography data points. In the second window, the digital elevation model profile can inlay its subsurface elevation measurements as a digital elevation model profile line corresponding to the position of the navigation line in the first window.
  • the bathymetry/topography of the subsurface structure image corresponding to the navigation line and digital elevation model profile line corresponding to the position of the navigation line can be compared in the second window. Comparing the subsurface structure image to the digital elevation model profile line can be performed to determine if the navigation line is positioned in the digitized georeferencing representation to accurately represent its location in the real world. A closer match of the digital elevation model profile line to the subsurface structure image can signify that the navigation line is closer to its appropriate location within the digitized georeferencing representation. A less close match between the digital elevation model profile line and the subsurface structure image can signify that the navigation line is not as close to its appropriate location within the digitized georeferencing representation. A less close match can result in the navigation line being adjusted to result in a closer match within the second window.
  • the navigation line and associated attributes can be derived from various data sources and may therefore not be accurate with respect to the navigation line’s real-world position.
  • This adjustment to the navigation line in the second window resulting from the comparison can be performed in order to more accurately geolocate the navigation line, therefore improving the accuracy of the navigation line and corresponding attributes when stored and used within a bathymetry/topography application database or other three-dimensional viewing environment.
  • the line analysis module 218 can automatically detect a change in the position or orientation of the navigation line in the second window.
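  • One simple, purely illustrative way to quantify how closely the digital elevation model profile line matches the bathymetry/topography of the subsurface structure image, so that a poor match can trigger an adjustment, is a root-mean-square misfit as sketched below; the disclosure does not specify a particular matching metric, and the values shown are invented.

```python
# Illustrative misfit between the image bathymetry and the DEM profile line; a
# smaller value indicates a closer match, as described above.
import numpy as np

def profile_misfit(image_bathymetry: np.ndarray, dem_profile: np.ndarray) -> float:
    return float(np.sqrt(np.mean((image_bathymetry - dem_profile) ** 2)))

image_bathy = np.array([-120.0, -118.0, -115.0, -110.0])  # picked from the image
dem_line = np.array([-122.0, -119.0, -114.0, -111.0])     # sampled from the DEM
print(profile_misfit(image_bathy, dem_line))
```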
  • the user interface update module 216 changes the position of the navigation line in the first window in response to automatically detecting, via the line analysis module 218, an adjustment to the position of the navigation line via the second window.
  • the user interface update module 216 can change the position of the navigation line in the first window displaying the plan view.
  • the change in the position or orientation of the navigation line in the first window can produce an adjusted navigation line, which can include appended attributes.
  • the appended attributes of the adjusted navigation line can directly correspond to the adjustments that were made to the original navigation line via the second window, such that the attributes of the original navigation line differ from the attributes of the adjusted navigation line.
  • the adjusted navigation line and associated appended attributes may be stored as a line of section without also storing the cross-sectional view of the second window or the plan view from the first window.
  • the associated appended attributes of the navigation line can inherently include adjusted reference points and updated georeferencing coordinates that can be used to redraw the navigation line in the plan view at its appropriate location and with its appropriate line geometry.
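  • As a sketch of how a detected adjustment might be propagated to the plan view (the disclosure does not prescribe a particular geometry update), the same translation and rotation could be applied to every vertex of the digitized navigation line; the function and parameter names below are hypothetical.

```python
# Hypothetical sketch: apply a translation plus an optional rotation about a pivot
# to the navigation-line vertices so the plan view reflects the adjustment.
import math
from typing import List, Tuple

def adjust_navigation_line(coords: List[Tuple[float, float]], dx: float, dy: float,
                           rotation_deg: float = 0.0,
                           pivot: Tuple[float, float] = (0.0, 0.0)):
    theta = math.radians(rotation_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    px, py = pivot
    adjusted = []
    for x, y in coords:
        rx = px + (x - px) * cos_t - (y - py) * sin_t   # rotate about the pivot
        ry = py + (x - px) * sin_t + (y - py) * cos_t
        adjusted.append((rx + dx, ry + dy))              # then translate
    return adjusted

line = [(0.0, 0.0), (100.0, 0.0), (200.0, 0.0)]
print(adjust_navigation_line(line, dx=25.0, dy=-10.0, rotation_deg=5.0))
```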
  • the adjusted navigation line with appended attributes can be stored as a line of section separate from the stored subsurface structure image.
  • the subsurface structure image corresponding to the navigation line can be stored prior to and in preparation of the processes described in FIG. 3.
  • the line of section is transformed into a format to be used within a three-dimensional viewing environment.
  • the line of section containing information regarding the adjusted navigation line, appended attributes, and at least some characteristics of the subsurface structure image and dimensional views can be transformed into a format usable in three-dimensional bathymetry or topography applications.
  • Transforming a line of section into a three-dimensional viewing format can include analyzing the subsurface structure image along with its corresponding adjusted navigation line.
  • the line analysis module 218 can analyze the original format of the subsurface structure image and perform a column-wise conversion of pixel values into individual wiggle traces.
  • the format can be a SEG-Y file format.
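  • The column-wise conversion mentioned above could, for example, look like the sketch below, which maps each pixel column of the subsurface structure image to an amplitude trace; the scaling choice is an assumption, and writing the traces to an actual SEG-Y file (e.g., with a library such as segyio) is omitted from this sketch.

```python
# Sketch under assumptions: column-wise conversion of an 8-bit grayscale section
# image into wiggle-trace amplitudes in the range [-1, 1].
import numpy as np

def image_to_traces(image: np.ndarray) -> np.ndarray:
    samples = image.astype(np.float32)
    amplitudes = (samples - 127.5) / 127.5   # centre pixel values around zero
    return amplitudes.T                      # one trace per image column

fake_image = np.random.default_rng(1).integers(0, 256, size=(400, 250), dtype=np.uint8)
traces = image_to_traces(fake_image)
print(traces.shape)  # (250, 400): 250 traces of 400 depth samples each
```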
  • FIG. 4 is a flowchart describing a process for importing, error-checking, updating, and converting a line of section using a graphical user interface according to one aspect of the disclosure. This process can include any examples as described by FIG. 3.
  • the computing device 200 can receive, via the communications port 206, an identified subsurface structure image with an associated location map or georeferencing coordinates.
  • the identified subsurface structure image can be received from various data sources, and can often contain a corresponding location map or georeferencing coordinates to help locate appropriate geographical coordinates where the subsurface structure image data was captured.
  • the identification process of subsurface structure images and corresponding maps and coordinates can be performed by an algorithm.
  • the computing device 200 receives an extracted location map identified in block 402 and georeferences the location map.
  • a navigation line can be geolocated when the navigation line is referenced in an accompanying map, at which point the location map can be extracted from various data sources and located at appropriate geographical coordinates in three-dimensional space.
  • the extracted location map may be incorrectly located as compared to its corresponding georeferencing coordinates, at which point the location map can be georeferenced to account for any coordinate referencing system issues.
  • the extracting and generation of geolocation navigation line location maps can be performed by an algorithm.
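  • Georeferencing an extracted location map as described above is commonly done by fitting an affine transform to control points whose pixel and world coordinates are both known; the least-squares sketch below is illustrative only, and the control-point values are invented.

```python
# Hedged sketch: fit [x_world, y_world] = A @ [col, row, 1] by least squares from
# a few control points, then use A to georeference any pixel on the location map.
import numpy as np

def fit_affine(pixel_pts: np.ndarray, world_pts: np.ndarray) -> np.ndarray:
    design = np.hstack([pixel_pts, np.ones((pixel_pts.shape[0], 1))])  # (n, 3)
    coeffs, *_ = np.linalg.lstsq(design, world_pts, rcond=None)
    return coeffs.T                                                    # (2, 3)

pixel = np.array([[0, 0], [500, 0], [500, 400], [0, 400]], dtype=float)
world = np.array([[430000.0, 6.250e6], [435000.0, 6.250e6],
                  [435000.0, 6.246e6], [430000.0, 6.246e6]])
A = fit_affine(pixel, world)
print(A @ np.array([250.0, 200.0, 1.0]))  # world coordinates of the map centre
```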
  • the computing device 200 receives a digitized navigation line.
  • a line corresponding to the navigation line can be drawn or traced conforming to the estimated measurement locations provided by the corresponding location map.
  • the computing device 200 can receive the digitized navigation line after the line is drawn or traced. This can provide the computing device with a navigation line and associated attributes, based on its georeferenced location map, for use in determining a more accurate line of section.
  • the digitization of the navigation line with respect to the location map can be performed by an algorithm.
  • Block 408 may be performed in conjunction with the process described by block 406 when georeferencing coordinates are provided.
  • the computing device receives a navigation line that is geolocated using georeferencing coordinates.
  • when georeferencing coordinates can be extracted or provided, they can be used to georeference the navigation line.
  • This navigation line georeferencing process may be more accurate than the process described by block 406.
  • extracting georeferencing coordinates and georeferencing a navigation line using those extracted coordinates can be performed by an algorithm.
  • the computing device 200 stores the navigation line and associated attributes in the memory 208 to be used for additional processing.
  • the memory 208 can store the navigation line location, geometry, and associated attributes derived from various data sources and the processes described by blocks 404, 406, and 408 more accurately defining and locating the navigation line.
  • the associated attributes can include a range of depth measurements, a top depth, a base depth, an image type, information about the source of the recorded data, a measurement unit type (e.g., kilometers, milliseconds), a geological age range, a domain type (e.g., depth, two-way-time), a navigation line length, and a navigation line geometry. In some examples, an algorithm may extract and automatically store any attributes associated with a targeted navigation line.
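  • Purely as an illustration of how the stored navigation line and its associated attributes might be organized in memory, a record such as the one sketched below could be used; the field names and defaults are assumptions, not terms defined by the disclosure.

```python
# Hypothetical record for a navigation line and the associated attributes listed
# above; field names and defaults are assumptions for illustration.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class NavigationLine:
    coordinates: List[Tuple[float, float]]        # georeferenced vertices (plan view)
    top_depth: float                              # shallowest depth on the section image
    base_depth: float                             # deepest depth on the section image
    measurement_unit: str = "m"                   # e.g., kilometers or milliseconds
    domain_type: str = "depth"                    # e.g., depth or two-way-time
    image_type: str = ""                          # raster format of the section image
    data_source: str = ""                         # provenance of the recorded data
    geological_age_range: str = ""
    line_length: float = 0.0
    extra_attributes: Dict[str, str] = field(default_factory=dict)

line = NavigationLine(coordinates=[(431000.0, 6.248e6), (433500.0, 6.249e6)],
                      top_depth=0.0, base_depth=850.0, data_source="legacy survey")
print(line.base_depth - line.top_depth)  # depth range covered by the section image
```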
  • the computing device 200 receives an appended subsurface structure image.
  • the subsurface structure image identified in block 402 can be imported from the memory 208 and appended to the navigation line location using the computing device 200.
  • the associated attributes of the navigation line stored in the process of block 410 can be applied to the subsurface structure image.
  • an algorithm may append the subsurface structure image to the navigation line location and apply associated attributes.
  • the computing device 200 receives an adjusted subsurface structure image after it has been cropped, rotated, or otherwise manipulated.
  • the appended subsurface structure image resulting from the process described by block 412 can be displayed by the display module 214 to allow for manipulation of the subsurface structure image.
  • the subsurface structure image can be cropped, rotated, vertically or horizontally stretched, or otherwise manipulated, including manipulation of any subsurface structure image attributes, in any fashion necessary to append the entirety of the subsurface structure image to the navigation line geometry.
  • the adjusted subsurface structure image and associated georeferencing characteristics can be stored in the memory 208 as a separate object corresponding to the navigation line.
  • the manipulation of the subsurface structure image can be performed by an algorithm.
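  • The kinds of crop, rotate, and stretch operations described for block 414 could be performed with an imaging library such as Pillow, as in the hedged sketch below; the file name, crop box, and target size are placeholders, not values from the disclosure.

```python
# Sketch under assumptions (Pillow): crop the usable part of the scanned section,
# level it with a small rotation, and stretch it to the pixel dimensions implied
# by the navigation-line length and depth range.
from PIL import Image

def fit_section_image(path, crop_box, rotation_deg, target_size):
    img = Image.open(path)
    img = img.crop(crop_box)                      # (left, upper, right, lower)
    img = img.rotate(rotation_deg, expand=True)   # small rotation to square up the image
    return img.resize(target_size)                # (width, height) in pixels

# Placeholder usage; the values below are invented for illustration:
# adjusted = fit_section_image("section_scan.png", (40, 60, 1240, 860), -1.5, (1200, 800))
# adjusted.save("section_scan_adjusted.png")
```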
  • the computing device loads a digital elevation model profile for the purpose of comparison against the subsurface structure image.
  • the subsurface structure image can represent bathymetry/topography data points.
  • a digital elevation model profile can overlay the subsurface structure image resulting from the process described by block 414.
  • Comparison of the digital elevation model profile against the bathymetry/topography delineated in the subsurface structure image can be used to determine if the location or geometry of the subsurface structure image corresponding to a navigation line is accurate. For example, a closer match of the digital elevation model profile to the bathymetry/topography delineated in the subsurface structure image can signify that the navigation line is closer to its appropriate location within the digitized georeferencing representation.
  • a less close match between the digital elevation model profile and the bathymetry/topography delineated in the subsurface structure image can signify that the navigation line is not as close to its appropriate location within the digitized georeferencing representation.
  • a less close match can result in the navigation line being adjusted to result in a closer match as described in block 418.
  • an algorithm can preemptively load a digital elevation model profile corresponding to the location of the subsurface structure image.
  • the computing device stores an adjusted navigation line resulting from adjusting the navigation line in response to the comparison described by block 416.
  • the location and geometry of the subsurface structure image can be continuously adjusted until a sufficient match between the subsurface structure image and the digital elevation model profile is achieved.
  • the user interface update module 216 can change the position of the navigation line in the plan view window. Changes to the navigation line via the cross-sectional view window can be depicted by the adjusted navigation line in the plan view window in real time via the user interface update module 216.
  • the user interface update module 216 can allow a user to witness a real time change in the position of the subsurface structure image in relation to the digital elevation model profile. An adjustment to the geometry or position of the subsurface structure image would correspond to a new overlay of digital elevation model profile elevations for comparison to the bathymetry/topography delineated in subsurface structure image.
  • the adjusted navigation line geometry can be stored in the memory 208 as further described by block 310. In some examples, an algorithm can be used to adjust the navigation line to determine the best match between the subsurface structure image and the digital elevation model profile.
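  • If the adjustment of block 418 were automated, one simple, illustrative approach is a grid search over candidate lateral shifts of the navigation line, keeping the shift whose re-sampled digital elevation model profile best matches the image bathymetry; the sketch below reuses the dem_profile_line and profile_misfit sketches shown earlier and is not the algorithm claimed by the disclosure.

```python
# Illustrative search for the lateral shift of the navigation line that minimizes
# the misfit between the DEM profile and the image bathymetry. Assumes the
# dem_profile_line and profile_misfit sketches shown earlier are in scope.
import numpy as np

def best_lateral_shift(dem, origin_xy, cell_size, line_xy, image_bathymetry,
                       shifts_m=np.arange(-200.0, 201.0, 10.0)):
    best = (float("inf"), 0.0, 0.0)
    for dx in shifts_m:
        for dy in shifts_m:
            shifted = [(x + dx, y + dy) for x, y in line_xy]
            profile = dem_profile_line(dem, origin_xy, cell_size, shifted)
            misfit = profile_misfit(image_bathymetry, profile)
            if misfit < best[0]:
                best = (misfit, dx, dy)
    return best  # (misfit, dx, dy) of the closest match found
```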
  • the computing device 200 transforms the subsurface structure image and navigation line into a three-dimensional viewing format.
  • the process of block 420 can be similar to the process described by block 312 with the exception of being performed by the computing device 200.
  • the navigation line file can be stored in the memory 208 as a line of section.
  • the memory may store at least some characteristics of the subsurface structure image and cross-sectional and plan views in relation to the stored line of section.
  • an algorithm may transform the navigation line and at least some characteristics of the subsurface structure image into a three-dimensional viewing format after determining that the best match between the bathymetry/topography delineated in the subsurface structure image and the digital elevation model profile has been achieved as described by block 418.
  • the computing device 200 imports the line of section, stored in a three-dimensional viewing format, into a three-dimensional viewing environment.
  • the computing device 200 can load the transformed line of section file into a bathymetry or topography application.
  • the three-dimensional viewing environment application can immediately import and use the transformed line of section data.
  • FIG. 5 is a display of a graphical user interface including a digitized georeferencing representation in a plan view and a line of section data capture form according to one example.
  • the graphical user interface 502 can be proprietary software such as ESRI ArcMap used for bathymetry and topography applications.
  • the graphical user interface 502 can include any number of icons, buttons, menu options, and the like to implement the examples.
  • the navigation line 508 can be depicted within the digitized georeferencing representation 506.
  • the digitized georeferencing representation 506 can depict multiple navigation lines at any given time.
  • the Line of Section data capture form 504 can be a custom non-proprietary software extension that operates within the proprietary graphical user interface 502 application framework.
  • the Line of Section data capture form 504 can include any number of icons, buttons, menu options, and the like to implement the examples.
  • FIG. 6 is a display of a graphical user interface including a window for importing a cross-sectional image and its associated attributes according to one example.
  • the Save New X-Section Image window 602 can be used to capture the associated attributes of a navigation line.
  • the Save New X-Section Image window 602 can allow for importation of a subsurface structure image corresponding to the navigation line and its associated attributes according to the processes defined by block 410.
  • the Save New X-Section Image window 602 can store the associated attributes and subsurface structure image in the memory 208.
  • the Save New X-Section Image window 602 can include any number of icons, buttons, menu options, and the like to implement the examples.
  • the Save New X-Section Image window 602 can be accessed from the Line of Section data capture form 504.
  • FIG. 7 is a display of a graphical user interface including a window for editing a cross-sectional image and its associated attributes according to one example.
  • FIG. 7 can represent the same example interface as shown in FIG. 6, but populated with data and in edit mode.
  • the Edit X-Section Image Details window 702 can be used to edit the associated attributes and subsurface structure image corresponding to a navigation line originally stored via the Save New X-Section Image window 602.
  • the associated attributes and subsurface structure image edited within the Edit X-Section Image Details window 702 can be stored in the memory 208.
  • the Edit X-Section Image Details window 702 can include any number of icons, buttons, menu options, and the like to implement the examples.
  • the Edit X-Section Image Details window 702 can be accessed from the Line of Section data capture form 504.
  • FIG. 8 is a display of a graphical user interface including an image editing window for cropping, rotating, stretching, or otherwise manipulating a cross-sectional view of a subsurface structure image according to one example.
  • the Image Editing Window 802 can display an editable subsurface structure image 804.
  • the Image Editing Window 802 can perform the process as described by block 414 for cropping, rotating, stretching, shrinking, and otherwise manipulating the editable subsurface structure image 804.
  • the edited subsurface structure image can be displayed within the Edit X-Section Image Details window 702 where it can be compared to the stored original subsurface structure image.
  • the Image Editing Window 802 can include any number of icons, buttons, menu options, and the like to implement the examples.
  • the Image Editing Window 802 can be accessed from the Edit X-Section Image Details window 702.
  • FIG. 9 is a display of a graphical user interface including a cross-sectional view of a subsurface structure image including bathymetry/topography data points for comparison with a digital elevation model profile according to one example.
  • the Compare X-Section bathy/topo with DEM profile at LoS location window 902 can compare the subsurface structure image 906 to a digital elevation model profile line 904.
  • the digital elevation model profile line 904 can correspond to elevations stored in the digital elevation model profile at which a navigation line is positioned.
  • the Compare X-Section bathy/topo with DEM profile at LoS location window 902 can allow for adjustment of the navigation line similar to the second window as described by block 306 and further described by block 416.
  • Editing tools 908 located within the Compare X-Section bathy/topo with DEM profile at LoS location window 902 can be used to adjust the position and geometry of the navigation line in the first window (e.g., FIG. 5).
  • the Editing tools 908 can be used to then load the digital elevation model profile at the adjusted navigation line location to update the digital elevation model profile line for additional comparison against the bathymetry/topography delineated in the subsurface structure image.
  • the Compare X-Section bathy/topo with DEM profile at LoS location window 902 can include any number of icons, buttons, menu options, and the like to implement the examples.
  • the Compare X-Section bathy/topo with DEM profile at LoS location window 902 can be accessed from the Line of Section data capture form 504.
  • FIG. 10 is a display of a graphical user interface including a digitized georeferencing representation in a plan view that is updated based on a comparison of a subsurface structure image and digital elevation model profile in a cross-sectional view according to one example.
  • the Compare X-Section bathy/topo with DEM profile at LoS location window 902 can be juxtaposed against the graphical user interface 502, which can allow a user to observe real time adjustment of the navigation line.
  • the original navigation line 508 shown in the graphical user interface 502 can be adjusted via the Compare X-Section bathy/topo with DEM profile at LoS location window 902, as described by blocks 306, 308, and 418.
  • the original navigation line 508 can be automatically adjusted within the graphical user interface 502 and represented as an adjusted navigation line 1002.
  • the Compare X-Section bathy/topo with DEM profile at LoS location window 902 can include a redraw profile button, which can send the geometry of the adjusted navigation line 1002 to the digital elevation model profile, returning a new elevation profile.
  • the new elevation profile corresponding to the adjusted navigation line 1002 can be superimposed against the subsurface structure image in the Compare X-Section bathy/topo with DEM profile at LoS location window 902.
  • FIG. 11 is a display of a graphical user interface including a revised line of section data capture form according to one example.
  • the Line of Section data capture form 1102 can be the same form as the Line of Section data capture form 504. This example depicts a Line of Section data capture form 1102 that includes more than one cross-sectional image. A single line of section as defined by the Line of Section data capture form 1102 may include more than one cross-sectional image.
  • systems, devices, and methods for importing, error-checking, updating, and converting a line of section using a graphical user interface are provided according to one or more of the following examples:
  • any reference to a series of examples is to be understood as a reference to each of those examples disjunctively (e.g., "Examples 1-4" is to be understood as "Examples 1, 2, 3, or 4").
  • Example 1 is a computer-implemented method comprising: displaying, in a first window of a graphical user interface, a navigation line of a data-acquiring system in a plan view of a digitized georeferencing representation, the digitized georeferencing representation having attributes appended thereto; generating, in a second window, a cross-sectional view of depths of a formation, the cross-sectional view including a subsurface structure image comprising measured depth information for an area; comparing the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line; in response to automatically detecting an adjustment to a position of the navigation line in the second window, changing the position of the navigation line in the first window to form an adjusted navigation line; storing at least some characteristics of the cross-sectional view of the formation and the plan view, along with the appended attributes and the adjusted navigation line as a line of section; and transforming the line of section into a format for a three-dimensional viewing environment.
  • Example 2 is the computer-implemented method of example 1, wherein the subsurface structure image corresponds to the position of the navigation line, the navigation line corresponding to locations at which the data-acquiring vessel acquired the measured depth information of the subsurface structure image.
  • Example 3 is the computer-implemented method of example 1, wherein comparing the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line includes adjusting the cross-sectional view in the second window to match the digital elevation model profile, the adjustment to the position of the navigation line conforming to a match between the cross-sectional view and the digital elevation model profile.
  • Example 4 is the computer-implemented method of example 1, wherein the attributes include an image type, data source information, a top depth, a base depth, a domain type, a measurement unit type, a geological age range, a line length, and a navigation line geometry.
  • Example 5 is the computer-implemented method of example 1, wherein the adjusted navigation line more accurately represents the locations at which the data-acquiring system acquired the measured depth information as compared to the navigation line, the measured depth information including time domain data.
  • Example 6 is the computer-implemented method of example 1, further comprising: receiving an associated location map or georeferencing coordinates for the cross-sectional view; and determining an initial location of the navigation line within the digitized georeferencing representation based on at least one of the associated location map and the georeferencing coordinates.
  • Example 7 is the computer-implemented method of example 1, wherein the format for a three-dimensional viewing environment includes a SEG-Y format, the SEG-Y format being a standardized file format for storing geophysical data.
  • Example 8 is a system comprising: a processing device; a digital displaying device; a memory device; and a non-transitory computer-readable medium including code that is executable by the processing device to: display, in a first window of a graphical user interface via the digital displaying device, a navigation line of a data-acquiring system in a plan view of a digitized georeferencing representation, the digitized georeferencing representation having attributes appended thereto; generate, in a second window viewable by the digital displaying device, a cross-sectional view of depths of a formation, the cross-sectional view including a subsurface structure image comprising measured depth information for an area; compare the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line; in response to automatically detecting an adjustment to a position of the navigation line in the second window, change the position of the navigation line in the first window to form an adjusted navigation line; store, in the memory device, at least some characteristics of the cross-sectional view of the formation and the plan view, along with the appended attributes and the adjusted navigation line as a line of section; and transform the line of section into a format for a three-dimensional viewing environment.
  • Example 9 is the system of example 8, wherein the subsurface structure image corresponds to the position of the navigation line, the navigation line corresponding to locations at which the data-acquiring vessel acquired the measured depth information of the subsurface structure image.
  • Example 10 is the system of example 8, the non-transitory computer-readable medium including code executable by the processing device to compare the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line further includes adjusting the cross-sectional view in the second window to match the digital elevation model profile, the adjustment to the position of the navigation line conforming to a match between the cross-sectional view and the digital elevation model profile.
  • Example 11 is the system of example 8, wherein the attributes include an image type, data source information, a top depth, a base depth, a domain type, a measurement unit type, a geological age range, a line length, and a navigation line geometry.
  • Example 12 is the system of example 8, wherein the adjusted navigation line more accurately represents the locations at which the data-acquiring system acquired the measured depth information as compared to the navigation line, the measured depth information including time domain data.
  • Example 13 is the system of example 8, the non-transitory computer-readable medium further including code that is executable by the processing device to: receive an associated location map or georeferencing coordinates for the cross-sectional view; and determine an initial location of the navigation line within the digitized georeferencing representation based on at least one of the associated location map and the georeferencing coordinates.
  • Example 14 is the system of example 8, wherein the format for a three-dimensional viewing environment includes a SEG-Y format, the SEG-Y format being a standardized file format for storing geophysical data.
  • Example 15 is a non-transitory computer-readable medium that includes instructions that are executable by a processing device to: display, in a first window of a graphical user interface, a navigation line of a data-acquiring system in a plan view of a digitized georeferencing representation, the digitized georeferencing representation having attributes appended thereto; generate, in a second window, a cross-sectional view of depths of a formation, the cross-sectional view including a subsurface structure image comprising measured depth information for an area; compare the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line; in response to automatically detecting an adjustment to a position of the navigation line in the second window, change the position of the navigation line in the first window to form an adjusted navigation line; store at least some characteristics of the cross-sectional view of the formation and the plan view, along with the appended attributes and the adjusted navigation line as a line of section; and transform the line of section into a format for a three-dimensional viewing environment.
  • Example 16 is the non-transitory computer-readable medium of example 15, wherein the subsurface structure image corresponds to the position of the navigation line, the navigation line corresponding to locations at which the data-acquiring vessel acquired the measured depth information of the subsurface structure image.
  • Example 17 is the non-transitory computer-readable medium of example 15, wherein the instructions that are executable by the processing device to compare the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line further include adjusting the cross-sectional view in the second window to match the digital elevation model profile, the adjustment to the position of the navigation line conforming to a match between the cross-sectional view and the digital elevation model profile.
  • Example 18 is the non-transitory computer-readable medium of example 15, wherein the attributes include an image type, data source information, a top depth, a base depth, a domain type, a measurement unit type, a geological age range, a line length, and a navigation line geometry, and wherein the format for a three-dimensional viewing environment includes a SEG-Y format, the SEG-Y format being a standardized file format for storing geophysical data.
  • Example 19 is the non-transitory computer-readable medium of example 15, wherein the adjusted navigation line more accurately represents the locations at which the data-acquiring system acquired the measured depth information as compared to the navigation line, the measured depth information including time domain data.
  • Example 20 is the non-transitory computer-readable medium of example 15, wherein the instructions are executable by the processing device to: receive an associated location map or georeferencing coordinates for the cross- sectional view; and determine an initial location of the navigation line within the digitized georeferencing representation based on at least one of the associated location map and the georeferencing coordinates.
  • Example 21 is a computer-implemented method comprising: displaying, in a first window of a graphical user interface, a navigation line of a data- acquiring system in a plan view of a digitized georeferencing representation, the digitized georeferencing representation having attributes appended thereto; generating, in a second window, a cross-sectional view of depths of a formation, the cross-sectional view including a subsurface structure image comprising measured depth information for an area; comparing the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line; in response to automatically detecting an adjustment to a position of the navigation line in the second window, changing the position of the navigation line in the first window to form an adjusted navigation line; storing at least some characteristics of the cross-sectional view of the formation and the plan view, along with the appended attributes and the adjusted navigation line as a line of section; and transforming the line of section into a format for a three-dimensional viewing environment.
  • Example 22 is the computer-implemented method of example 21, wherein the subsurface structure image corresponds to the position of the navigation line, the navigation line corresponding to locations at which the data-acquiring vessel acquired the measured depth information of the subsurface structure image.
  • Example 23 is the computer-implemented method of any of examples 21 to 22, wherein comparing the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line includes adjusting the cross-sectional view in the second window to match the digital elevation model profile, the adjustment to the position of the navigation line conforming to a match between the cross-sectional view and the digital elevation model profile.
  • Example 24 is the computer-implemented method of any of examples 21 to 23, wherein the attributes include an image type, data source information, a top depth, a base depth, a domain type, a measurement unit type, a geological age range, a line length, and a navigation line geometry.
  • Example 25 is the computer-implemented method of any of examples 21 to 24, wherein the adjusted navigation line more accurately represents the locations at which the data-acquiring system acquired the measured depth information as compared to the navigation line, the measured depth information including time domain data.
  • Example 26 is the computer-implemented method of any of examples 21 to 25, further comprising: receiving an associated location map or georeferencing coordinates for the cross-sectional view; and determining an initial location of the navigation line within the digitized georeferencing representation based on at least one of the associated location map and the georeferencing coordinates.
  • Example 27 is the computer-implemented method of any of examples 21 to 26, wherein the format for a three-dimensional viewing environment includes a SEG-Y format, the SEG-Y format being a standardized file format for storing geophysical data.
  • Example 28 is a non-transitory computer-readable medium that includes instructions that are executable by a processing device to: display, in a first window of a graphical user interface, a navigation line of a data-acquiring system in a plan view of a digitized georeferencing representation, the digitized georeferencing representation having attributes appended thereto; generate, in a second window, a cross-sectional view of depths of a formation, the cross-sectional view including a subsurface structure image comprising measured depth information for an area; compare the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line; in response to automatically detecting an adjustment to a position of the navigation line in the second window, change the position of the navigation line in the first window to form an adjusted navigation line; store at least some characteristics of the cross-sectional view of the formation and the plan view, along with the appended attributes and the adjusted navigation line as a line of section; and transform the line of section into a format for a three-dimensional viewing environment.
  • Example 29 is the non-transitory computer-readable medium of example 28, wherein the subsurface structure image corresponds to the position of the navigation line, the navigation line corresponding to locations at which the data- acquiring vessel acquired the measured depth information of the subsurface structure image.
  • Example 30 is the non-transitory computer-readable medium of any of examples 28 to 29, wherein the instructions that are executable by the processing device to compare the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line further include adjusting the cross-sectional view in the second window to match the digital elevation model profile, the adjustment to the position of the navigation line conforming to a match between the cross-sectional view and the digital elevation model profile.
  • Example 31 is the non-transitory computer-readable medium of any of examples 28 to 30, wherein the attributes include an image type, data source information, a top depth, a base depth, a domain type, a measurement unit type, a geological age range, a line length, and a navigation line geometry.
  • Example 32 is the non-transitory computer-readable medium of any of examples 28 to 31, wherein the format for a three-dimensional viewing environment includes a SEG-Y format, the SEG-Y format being a standardized file format for storing geophysical data.
  • Example 33 is the non-transitory computer-readable medium of any of examples 28 to 32, wherein the adjusted navigation line more accurately represents the locations at which the data-acquiring system acquired the measured depth information as compared to the navigation line, the measured depth information including time domain data.
  • Example 34 is the non-transitory computer-readable medium of any of examples 28 to 33, wherein the instructions are executable by the processing device to: receive an associated location map or georeferencing coordinates for the cross-sectional view; and determine an initial location of the navigation line within the digitized georeferencing representation based on at least one of the associated location map and the georeferencing coordinates.
  • Example 35 is the non-transitory computer-readable medium of any of examples 28 to 34, wherein the non-transitory computer-readable medium is in a system that comprises: the processing device; a digital displaying device for displaying the first window and the second window; and a memory device for storing at least some characteristics of the cross-sectional view of the formation and the plan view, along with the appended attributes and the adjusted navigation line as a line of section.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Theoretical Computer Science (AREA)
  • Hydrology & Water Resources (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Acoustics & Sound (AREA)
  • Automation & Control Theory (AREA)
  • Processing Or Creating Images (AREA)

Abstract

In a first window, a graphical user interface can display a navigation line in a plan view of a digitized georeferencing representation. In a second window, the graphical user interface can generate a cross-sectional view of depths of a formation. The cross-sectional view can be compared to a digital elevation model profile to determine an adjustment to the navigation line. In response to automatically detecting an adjustment to the navigation line in the second window, the graphical user interface can change the position of the navigation line in the first window to form an adjusted navigation line. At least some characteristics of the cross-sectional view and the plan view can be stored along with the adjusted navigation line as a line of section. The line of section can be transformed into a three-dimensional viewing format.

Description

AUTOMATICALLY UPDATING A GEOREFERENCING GRAPHICAL USER INTERFACE
FOR NAVIGATION LINE ADJUSTMENTS
Technical Field
[0001] The present disclosure relates generally to systems and methods for use in a graphical user interface. More specifically, but not by way of limitation, this disclosure relates to automatically updating a georeferencing graphical user interface for navigation line adjustments.
Background
[0002] There is an abundant amount of geological data available that can be mapped within a graphical user interface and applied to data poor areas, which can be vital for understanding a subsurface structure. Graphical user interfaces can be used to identify and map virtualized representations relating to subsurface structures using the available geological data. However, the exact geographic position and orientation of these subsurface data points on the Earth’s surface can often have associated uncertainties, and typically cannot be mapped directly to their corresponding locations within the graphical user interface without error or repeated manipulation of the mapped data points. As a result, significant time is wasted each time a user is forced to save, load, import, reload, or open a window within a georeferencing graphical user interface to affect a corresponding change performed in another window.
Brief Description of the Drawings
[0003] FIG. 1 is a contextual view of an example of a data-acquiring system gathering data points for a subsurface structure image according to one aspect of the disclosure.
[0004] FIG. 2 is a block diagram of an example of a computing device usable for executing program code for using dual views of a geographic feature in both plan and cross-sectional views generated using data to modify a navigation line to form a line of section according to one aspect of the disclosure.
[0005] FIG. 3 is a flowchart describing a process for automatically updating a position in a navigation line in a first window in response to a change to the navigation line in a second window according to one aspect of the disclosure.
[0006] FIG. 4 is a flowchart describing a process for importing, error-checking, updating, and converting a line of section using a graphical user interface according to one aspect of the disclosure.
[0007] FIG. 5 is a display of an example of a graphical user interface including a digitized georeferencing representation in a plan view and a line of section data capture form according to one aspect of the disclosure.
[0008] FIG. 6 is a display of an example of a graphical user interface including a window for importing a cross-sectional image and its associated attributes according to one aspect of the disclosure.
[0009] FIG. 7 is a display of an example of a graphical user interface including a window for editing a cross-sectional image and its associated attributes according to one aspect of the disclosure.
[0010] FIG. 8 is a display of an example of a graphical user interface including an image editing window for cropping and rotating a cross-sectional view of a subsurface structure image according to one aspect of the disclosure.
[0011] FIG. 9 is a display of an example of a graphical user interface including a cross-sectional view of a subsurface structure image for comparison with a digital elevation module profile according to one aspect of the disclosure.
[0012] FIG. 10 is a display of an example of a graphical user interface including a digitized georeferencing representation in a plan view that is updated based on a comparison of a subsurface structure image and digital elevation module according to one aspect of the disclosure.
[0013] FIG. 11 is a display of an example of a graphical user interface including a revised line of section data capture form according to one aspect of the disclosure.
Detailed Description
[0014] Certain aspects and features relate to using dual views of a geographic feature in both plan and cross-sectional views generated using data to modify a navigation line to form and position a line of section. These aspects and features relate to providing a system and method for updating one window of a graphical user interface displaying a plan view of a navigation line in response to automatically detecting an adjustment to the navigation line in another window shown as a cross-sectional view. The process can include accurately and efficiently mapping, automatically adjusting, transforming, and storing subsurface structure attributes and corresponding raster images in a single database to reduce uncertainty in bathymetry and topography applications where raw georeferencing data is scarce or has been lost.
[0015] A graphical user interface can contain user-viewable windows for displaying a navigation line in various formats and with respect to different formation representations and profiles. A navigation line corresponding to the traversed path of a data-acquiring system, such as a vessel in the form of a boat, can be displayed in a window of the graphical user interface as a plan view of a digitized georeferencing representation. The digitized georeferencing representation can be defined by attributes and associated with subsurface structure images corresponding to measured data points acquired by the data-acquiring system along the navigation line. In a separate window, the graphical user interface can generate a cross-sectional view including the subsurface structure image corresponding to the navigation line depicted within the digitized georeferencing representation (e.g., FIG. 9). In some examples, the cross-sectional view can allow for comparison of the subsurface structure image with the elevations corresponding to the navigation line position for purposes of adjusting the position of the subsurface structure image to more accurately append to the elevation profile. In some examples, the cross-sectional view can further allow for comparison of the navigation line and appended subsurface structure image against a digital elevation model profile for purposes of more accurately locating the navigation line, and the navigation line’s corresponding attributes and subsurface structure image, with respect to its relative position in three-dimensional space shown by the digitized georeferencing representation (e.g., FIG. 10). The geometry (e.g., associated attributes) of the navigation line can be adjusted in terms of orientation, location, and line length when comparing the navigation line to the subsurface structure image in the cross-sectional view window.
[0016] In response to automatically detecting an adjustment to the navigation line when compared against the subsurface structure image or the digital elevation model profile in the cross-sectional view window, the graphical user interface can adjust the position and orientation of the navigation line in the other window displaying a plan view of the digitized georeferencing representation. Once the most accurate location and orientation of the navigation line have been geolocated with respect to the digital elevation model profile, the graphical user interface can store the adjusted navigation line as a line of section. The line of section can then be transformed into an appropriate format to be used within a three-dimensional viewing environment (e.g., SEG-Y).
[0017] Being able to locate a subsurface structure image corresponding to a navigation line more accurately within a three-dimensional space can save time by preemptively reducing error at early processing stages, and can increase the quality, accuracy, and confidence of decision-making processes based on the navigation lines. Updating the position of a navigation line in a plan view window in response to automatically detecting an adjustment to a position or orientation of the navigation line in a cross-sectional view window can provide improved accuracy of the adjusted navigation line with respect to its real-world attributes and corresponding subsurface structure image for use in bathymetry and topography applications.
[0018] Comparatively, conventional methods of manually adjusting a navigation line in a plan view without automatically detecting adjustments made in a cross-sectional view window may not accurately represent those adjustments, leaving the navigation line erroneous with respect to its actual real-world location. In a conventional error-correction method, minimizing error resulting from manual adjustment of the plan view navigation line can require duplicated efforts involving continuous reloading of the cross-sectional view window after making each navigation line adjustment in the plan view window. Making manual navigation line adjustments in the plan view window and reloading the cross-sectional view window to verify the accuracy of the digital elevation model profile against the subsurface structure image corresponding to the navigation line can be a time-consuming and inaccurate process. In some embodiments of the present invention, the automatic adjustment of the navigation line in the plan view can provide a reduction in processing time by avoiding manually repeating the loading process, thus improving the efficiency of the computer-implemented process and giving a user real-time efficiency gains.
[0019] The aforementioned example (i.e. automatically detecting an adjustment to a navigation line in a cross-sectional view in a second graphical user interface window and updating the position and orientation of the navigation line in a plan view of a first graphical user interface window in response to the detected adjustment) can be a portion of a more detailed computer-implemented workflow. This workflow, which can include the automatic navigation line adjustment, can standardize the processes from (i) importing a subsurface structure image with associated attributes to (ii) using the line of section in its three-dimensional viewing format.
[0020] In some examples, the computer-implemented workflow may include steps such as (i) identifying a subsurface structure image from a data source (e.g., database 210 of FIG. 2), (ii) extracting georeferencing coordinates from the data source, digitizing or georeferencing the navigation line, (iii) storing the navigation line and its associated attributes, (iv) appending the subsurface structure image to the navigation line via geometric manipulation (e.g., cropping, rotating, stretching, shrinking, etc.), and (v) loading a digital elevation model profile. The workflow implementing the automatic navigation line adjustment can accurately constrain the navigation line data as much as possible prior to use in a three-dimensional viewing application, thus removing any duplication of effort within the viewing application. Comparatively, conventional methods of error checking may not include accurately constraining the navigation line data prior to loading in the three-dimensional viewing application, and can therefore require the time-consuming process of constantly reloading the application upon finding and adjusting each inaccurate navigation line or navigation line attributes. This example of a workflow implementing automatic navigation line adjustment can allow for data to be clearly organized, stored, standardized, and checked for quality prior to utilization so that the data can be used effectively in a subsurface modelling application.
[0021] In some examples, a navigation line can be a line corresponding to a traversed pathway or journey of a data-acquiring system during which the system recorded data about the bathymetry and/or topography of a surface. The navigation line can be linear, relatively linear, or of any geometry resulting from implementation of any conventional bathymetric and topographic measuring methods. The navigation line can include any associated attributes related to the recorded bathymetric and topographic data including a range of depth measurements, a top depth, a base depth, an image type, information about the source of the recorded data, a measurement unit type, a geological age range, a domain type, a navigation line length, and a navigation line geometry. In some examples, the navigation line can be associated with a corresponding location map or georeferencing coordinates, which can be used to locate the navigation line at a location or series of locations or coordinates within three-dimensional space. In some examples, the attributes, location map, and georeferencing coordinates can be retrieved from various data sources (e.g., a database). The retrieved attributes, location map, or georeferencing coordinates may not be accurate with respect to the real-world locations at which the data-acquiring system recorded data, and the data recorded at those locations may not be accurate.
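The attribute set described in the preceding paragraph can be pictured as a simple record attached to the digitized line. The following Python sketch is illustrative only; the class and field names are assumptions rather than part of the disclosure, and a real implementation may store these attributes in a GIS attribute table or database instead.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class NavigationLine:
    """Illustrative container for a navigation line and its associated attributes."""
    vertices: List[Tuple[float, float]]        # (easting, northing) pairs tracing the pathway
    image_type: str = "raster"                 # e.g., a scanned cross-section image
    data_source: str = "unknown"               # provenance of the recorded data
    top_depth: float = 0.0                     # shallowest depth shown on the image
    base_depth: float = 0.0                    # deepest depth shown on the image
    domain_type: str = "two-way-time"          # "depth" or "two-way-time"
    measurement_units: str = "ms"              # e.g., "ms" or "km"
    geological_age_range: Optional[str] = None

    @property
    def line_length(self) -> float:
        """Planar length of the navigation line computed from its vertices."""
        return sum(
            ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
            for (x1, y1), (x2, y2) in zip(self.vertices, self.vertices[1:])
        )
```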
[0022] In some examples, the attributes, location map, or georeferencing coordinates associated with the navigation line can be used to digitize the line into a software application or graphical user interface (e.g., ESRI ArcGIS/ArcMap) to create a digitized georeferenced representation of the line within a digitized georeferencing representation. The location map or georeferencing coordinates can be used to identify an initial location of the navigation line within the digitized georeferencing representation, and the attributes of the navigation line can further define the navigation line at those identified navigation line locations. In some examples, the digitized georeferencing representation can be a plan view (e.g., map view) of a location or area in three-dimensional space. The digitized georeferencing representation can include colors, shading, or any other conventional markings to represent a range of disparate elevations across a surface. Elevation measurements recorded by a data-acquiring system can correspond to the relative elevations shown at the location or series of locations of the digitized georeferenced navigation line within the digitized georeferencing representation.
[0023] In some examples, the navigation line can be associated with a corresponding subsurface structure image. A subsurface structure image can be a two-dimensional vertical image representing a range of depths measured along the pathway of a data-acquiring system. The range of depths depicted by a subsurface structure image can be represented by one or more sets of elevation lines defining a bathymetry of a surface or multiple surfaces. The navigation line can correspond to the pathway of the data-acquiring system.
[0024] In some examples, a subsurface structure image and navigation line can be compared to a digital elevation model profile for purposes of matching the navigation line and corresponding subsurface structure image to the digital elevation model profile. A subsurface structure image can include elevation measurement data for the length of the navigation line. The position or orientation of the navigation line can be adjusted to better conform to the digital elevation model profile, which can result in the attributes and the digitized georeferenced location or locations of the navigation line being appended to represent the adjusted navigation line. The adjusted navigation line can more accurately define the locations at which the data-acquiring system acquired the depth measurements than the original navigation line. In some examples, the digital elevation model profile can be a three-dimensional computer graphics representation of a terrain or underwater surface including a range of elevations across an area. A subsurface structure image can correspond to a two-dimensional representation of a range of depths that can be geolocated within three-dimensional space using the digital elevation model profile. As such, a three-dimensional viewing environment can include multiple data points corresponding to multiple subsurface structure images.
[0025] In some examples, a line of section can be the resulting pairing of a subsurface structure image with a corresponding navigation line. A line of section can include information relating to both a navigation line and its corresponding subsurface structure image, which can be stored and used within a three-dimensional viewing environment in further bathymetry or topography applications.
[0026] In some examples, the three-dimensional viewing environment can implement a SEG-Y file format, which is a standardized file format for storing geophysical data.
[0027] These illustrative examples are given to introduce the reader to the general subject matter discussed here and are not intended to limit the scope of the disclosed concepts. The following sections describe various additional features and examples with reference to the drawings in which like numerals indicate like elements, and directional descriptions are used to describe the illustrative aspects but, like the illustrative aspects, should not be used to limit the present disclosure.
[0028] FIG. 1 is a contextual view depicting a data-acquiring system gathering data points for a subsurface structure image according to one example. In this example, the data-acquiring system 102 can travel over a subsurface 108 while simultaneously measuring elevation data points corresponding to the subsurface 108 with a data collection device 104. The data-acquiring system 102 can also measure data points corresponding to one or more subsurfaces (e.g., subsurface 110) located beneath the subsurface 108. Primary functions of the data-acquiring system 102 can include determining location, geometry, and other attributes of subsurface structures (e.g., subsurface formations). In some examples where the data-acquiring system 102 is performing a primary function, the data-acquiring system 102 can measure data points and features related to bathymetry and topography measurements.
[0029] The data-acquiring system 102 can be any conventional guided or controllable tool or vessel implemented by conventional methods for measuring bathymetry data. In this example, the data-acquiring vessel can be a ship including the data collection device 104. The data collection device 104 can be any device that can implement any conventional method for measuring bathymetry data. The data collection device 104 can implement methods for measuring bathymetry data including depth sounding, sonar, LIDAR/LADAR, and single-beam or multi-beam echosounding. In some examples, other conventional methods for subsurface elevation data collection, such as satellite radar, can be used in place of the data-acquiring system 102 and the data collection device 104. In some examples, the aforementioned example methods for measuring bathymetry data can be used to measure data points useable for generating a digital elevation model profile.
[0030] In this example, data collection device 104 can transmit feedback waves 106. The feedback waves 106 can be sound or light waves used to determine the depths of the subsurface 108 below the data-acquiring system 102. The feedback waves 106 can be emitted by the data collection device 104 and traverse the water until reaching an object of significant size and density. Upon reaching an object of significant size and density, such as the subsurface 108, the feedback waves 106 can be reflected by the subsurface 108. After being reflected by the subsurface 108, the feedback waves can traverse the water back to the data collection device 104. The data collection device 104 can determine the depths of the subsurface 108 at any given location by determining the total travel time, or two-way-time, from initially sending feedback waves 106 to receiving feedback waves 106 after being reflected by the subsurface 108. A longer two-way-time can represent a larger depth value (i.e. a lower elevation level) compared to a shorter two-way-time representing a smaller depth value (i.e. a higher elevation level). Using multiple depth measurements, the data collection device 104 can construct a subsurface structure image representing the depths at each measurement point along the navigation line of the data-acquiring system 102.
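As a worked illustration of the two-way-time relationship described above, the short sketch below converts a recorded travel time into a depth estimate; the assumed sound speed in water (roughly 1500 m/s) is a typical value and is not taken from the disclosure.

```python
def depth_from_two_way_time(two_way_time_s: float, sound_speed_m_s: float = 1500.0) -> float:
    """Estimate depth (in meters) from a two-way travel time (in seconds).

    The feedback wave travels down to the subsurface and back, so only half of
    the recorded travel time contributes to the one-way distance.
    """
    return sound_speed_m_s * two_way_time_s / 2.0


# A 2.0 s two-way time at ~1500 m/s corresponds to roughly 1500 m of water depth.
print(depth_from_two_way_time(2.0))  # 1500.0
```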
[0031] FIG. 2 is a block diagram of a computing device 200 usable for executing program code for using dual views of a geographic feature in both plan and cross-sectional views generated using data to modify a navigation line to form a line of section according to one example.
[0032] The computing device 200 can include a processor 202, a bus 204, a communications port 206, and a memory 208. In some examples, the components shown in FIG. 2 (e.g., the processor 202, the bus 204, the communications port 206, and the memory 208) can be integrated into a single structure. For example, the components can be within a single housing. In other examples, the components shown in FIG. 2 can be distributed (e.g., in separate housings) and in electrical communication with each other.
[0033] The processor 202 can execute one or more operations for implementing some examples. The processor 202 can execute instructions stored in the memory 208 to perform the operations. The processor 202 can include one processing device or multiple processing devices. Non-limiting examples of the processor 202 include a Field-Programmable Gate Array (“FPGA”), an application-specific integrated circuit (“ASIC”), a microprocessor, etc.
[0034] The processor 202 can be communicatively coupled to the memory 208 via the bus 204. The non-volatile memory 208 may include any type of memory device that retains stored information when powered off. Non-limiting examples of the memory 208 include electrically erasable and programmable read-only memory (“EEPROM”), flash memory, or any other type of non-volatile memory. In some examples, at least some of the memory 208 can include a medium from which the processor 202 can read instructions. A computer-readable medium can include electronic, optical, magnetic, or other storage devices capable of providing the processor 202 with computer-readable instructions or other program code. Non-limiting examples of a computer-readable medium include (but are not limited to) magnetic disk(s), memory chip(s), ROM, random-access memory (“RAM”), an ASIC, a configured processor, optical storage, or any other medium from which a computer processor can read instructions. The instructions can include processor-specific instructions generated by a compiler or an interpreter from code written in any suitable computer-programming language, including, for example, C, C++, C#, etc.
[0035] The database 210 can include information related to bathymetry data including location maps, georeferencing coordinates, navigation line attributes, and subsurface structure images. Communications port 206 can interface with the database 210 to transfer location maps, georeferencing coordinates, navigation line attributes, or subsurface structure images to the computing device 200. Bathymetry data received by the communications port 206 can be transmitted to the memory 208 via the bus 204. The memory 208 can store any bathymetry data received and any data relating to a line of section for implementing some examples. The memory 208 can store at least some characteristics of the cross-sectional and plan views from each window along with their associated attributes.
[0036] The memory 208 can include program code for a user interface generation module 212, a display module 214, a user interface update module 216, and a line analysis module 218. The user interface generation module 212 can generate non-proprietary user interface windows within a proprietary graphical user interface architecture in preparation of displaying the interface windows to a user. The display module 214 can output the interface windows generated by user interface generation module 212 to a user along with any associated proprietary interface windows. The user interface update module 216 can update information or visual representations in one or more interface windows in response to automatically detecting a change in information (e.g., adjustment to the position of a navigation line) in another interface window. The line analysis module 218 can detect a change in the position or geometry of a navigation line for the purpose of activating the user interface update module 216.
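The interaction between the line analysis module 218 and the user interface update module 216 can be thought of as a simple observer relationship: the analysis module detects a change and pushes it to any listener that redraws the plan view. The Python sketch below is a minimal illustration of that wiring; the class and method names are hypothetical and do not describe the actual program code.

```python
from typing import Callable, List, Tuple

Vertices = List[Tuple[float, float]]


class LineAnalysisModule:
    """Detects changes to the navigation line and notifies registered listeners."""

    def __init__(self) -> None:
        self._listeners: List[Callable[[Vertices], None]] = []
        self._current: Vertices = []

    def register(self, listener: Callable[[Vertices], None]) -> None:
        self._listeners.append(listener)

    def set_line(self, vertices: Vertices) -> None:
        if vertices != self._current:      # an adjustment was detected
            self._current = vertices
            for listener in self._listeners:
                listener(vertices)         # push the change to the plan-view window


class UserInterfaceUpdateModule:
    """Redraws the navigation line in the plan-view window when notified."""

    def on_line_adjusted(self, vertices: Vertices) -> None:
        print(f"Redrawing plan-view navigation line through {len(vertices)} vertices")


analysis = LineAnalysisModule()
analysis.register(UserInterfaceUpdateModule().on_line_adjusted)
analysis.set_line([(0.0, 0.0), (120.5, 40.2)])  # simulated adjustment from the cross-sectional view
```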
[0037] FIG. 3 is a flowchart describing a process for automatically updating a position in a navigation line in a first window in response to a change to the navigation line in a second window according to one aspect of the disclosure.
[0038] In block 302, the display module 214 displays a first window of a user interface as a plan view. The user interface in the first window can be a digitized georeferencing representation that can include a digitized georeferenced navigation line representing a real-world navigation line. The first window containing the digitized georeferenced navigation line within the digitized georeferencing representation in a plan view can depict the relative elevations across which a data-acquiring system can record bathymetric and topographic information (e.g., the graphical user interface 502 of FIG. 5, where the navigation line 508 crosses various elevations shown in the plan view).
[0039] The digitized georeferenced navigation line can be generated using or derived from bathymetry data measured by a data-acquiring system (e.g., data-acquiring system 102 of FIG. 1). Measurements and coordinates derived from the data-acquiring system may not accurately represent the real-world measurement locations and may only describe estimated measurement locations. As such, additional data points from various data sources (e.g., database 210 of FIG. 2) can be used to more precisely and accurately identify the locations representing the navigation line (i.e. the identified locations of the navigation line can be used to update the corresponding depiction of the digitized navigation line within the digitized georeferencing representation). Any relevant attributes corresponding to the navigation line can be linked to the digitized georeferenced navigation line such that those attributes can be adjusted to alter the characteristics of the digitized georeferenced navigation line (e.g., domain type, line location, line geometry) in a separate window.
[0040] In block 304, the user interface generation module 212 generates and then displays, via the display module 214, a second window of the user interface as a cross-sectional view (e.g., FIG. 9). The cross-sectional view in the second window can include a subsurface structure image and a digital elevation model profile line corresponding to the navigation line. The digital elevation model profile line can be a two-dimensional representation of at least some elevations stored in a three-dimensional digital elevation model profile. The elevations represented by the digital elevation model profile line can correspond to the elevations at the locations (e.g., pathway) of the navigation line.
[0041] After generating the cross-sectional view via the user interface generation module 212, the resulting graphical user interface can display, via display module 214, two separate user interface windows: (i) a plan view interface depicting a navigation line (e.g., FIG. 5), and (ii) a cross-sectional view interface depicting a subsurface structure image and the digital elevation model profile line corresponding to the navigation line. The digital elevation model profile line of the cross-sectional view in the second window can represent the elevation levels at the position of the navigation line in the first window.
[0042] In some examples, the graphical user interface can generate, in a separate view of the same window, a cross-sectional view including the subsurface structure image corresponding to the digitized georeferencing representation and the navigation line.
[0043] In block 306, the line analysis module 218 automatically detects an adjustment to the position of the navigation line by comparing the cross-sectional view to a digital elevation model profile. A digital elevation model profile can be loaded and superimposed as a digital elevation model profile line against the cross-sectional view including a subsurface structure image in the second window. This can allow for comparison of the digital elevation model profile against the bathymetry/topography of the subsurface structure image corresponding to the navigation line for the purposes of detecting an adjustment to the navigation line. The digital elevation model profile can contain three-dimensional elevation measurements encompassing the locations at which the data-acquiring system recorded the depth measurements. The second window can include the subsurface structure image corresponding to the navigation line, where the subsurface structure image includes bathymetry/topography data points. In the second window, the digital elevation model profile can inlay its subsurface elevation measurements as a digital elevation model profile line corresponding to the position of the navigation line in the first window.
[0044] The bathymetry/topography of the subsurface structure image corresponding to the navigation line and digital elevation model profile line corresponding to the position of the navigation line can be compared in the second window. Comparing the subsurface structure image to the digital elevation model profile line can be performed to determine if the navigation line is positioned in the digitized georeferencing representation to accurately represent its location in the real world. A closer match of the digital elevation model profile line to the subsurface structure image can signify that the navigation line is closer to its appropriate location within the digitized georeferencing representation. A less close match between the digital elevation model profile line and the subsurface structure image can signify that the navigation line is not as close to its appropriate location within the digitized georeferencing representation. A less close match can result in the navigation line being adjusted to result in a closer match within the second window.
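One way to quantify how closely the digital elevation model profile line matches the bathymetry/topography picked from the subsurface structure image is a simple misfit measure such as a root-mean-square difference; the sketch below is an illustrative assumption and does not describe a specific matching criterion from the disclosure.

```python
import math
from typing import Sequence


def profile_misfit(image_elevations: Sequence[float], dem_elevations: Sequence[float]) -> float:
    """Root-mean-square difference between two equally sampled elevation profiles.

    A smaller value suggests the navigation line is closer to its true location.
    """
    if len(image_elevations) != len(dem_elevations):
        raise ValueError("profiles must be sampled at the same positions")
    return math.sqrt(
        sum((a - b) ** 2 for a, b in zip(image_elevations, dem_elevations)) / len(image_elevations)
    )


# A misfit near zero indicates a close positional match.
print(profile_misfit([-120.0, -130.5, -141.0], [-119.0, -131.0, -140.0]))
```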
[0045] The navigation line and associated attributes can be derived from various data sources and may therefore not be accurate with respect to the navigation line’s real-world position. This adjustment to the navigation line in the second window resulting from the comparison can be performed in order to more accurately geolocate the navigation line, therefore improving the accuracy of the navigation line and corresponding attributes when stored and used within a bathymetry/topography application database or other three-dimensional viewing environment. The line analysis module 218 can automatically detect a change in the position or orientation of the navigation line in the second window.
[0046] In block 308, the user interface update module 216 changes the position of the navigation line in the first window in response to automatically detecting, via the line analysis module 218, an adjustment to the position of the navigation line via the second window. After automatically detecting a change in the position or orientation of the navigation line as a result of the comparison between the digital elevation model profile line and the subsurface structure image in the second window, the user interface update module 216 can change the position of the navigation line in the first window displaying the plan view. The change in the position or orientation of the navigation line in the first window can produce an adjusted navigation line, which can include appended attributes. The appended attributes of the adjusted navigation line can directly correspond to the adjustments that were made to the original navigation line via the second window, such that the attributes of the original navigation line differ from the attributes of the adjusted navigation line.
[0047] In block 310, at least some characteristics of the cross-sectional view of the subsurface structure image and the plan view of the digitized georeferencing representation, along with attributes corresponding to the adjusted navigation line and the adjusted navigation line are stored as a line of section within the memory 208. In some examples, the adjusted navigation line and associated appended attributes may be stored as a line of section without also storing the cross-sectional view of the second window or the plan view from the first window. Instead of storing image captures of the cross-sectional and plan views, the associated appended attributes of the navigation line can inherently include adjusted reference points and updated georeferencing coordinates that can be used to redraw the navigation line in the plan view at its appropriate location and with its appropriate line geometry. In other words, the adjusted navigation line with appended attributes can be stored as a line of section separate from the stored subsurface structure image. The subsurface structure image corresponding to the navigation line can be stored prior to and in preparation of the processes described in FIG. 3.
[0048] In block 312, the line of section is transformed into a format to be used within a three-dimensional viewing environment. The line of section containing information regarding the adjusted navigation line, appended attributes, and at least some characteristics of the subsurface structure image and dimensional views can be transformed into a format usable in three-dimensional bathymetry or topography applications. Transforming a line of section into a three-dimensional viewing format can include analyzing the subsurface structure image along with its corresponding adjusted navigation line. The line analysis module 218 can analyze the original format of the subsurface structure image and perform a column-wise conversion of pixel values into individual wiggle traces. The format can be a SEG-Y file format.
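The column-wise conversion mentioned above can be sketched as follows: each pixel column of the subsurface structure image becomes one trace, with pixel intensity standing in for amplitude. Writing a fully compliant SEG-Y file would normally be handed to a dedicated library (for example, segyio or ObsPy); the sketch below shows only the image-to-trace step and assumes the image is already loaded as a two-dimensional array.

```python
import numpy as np


def image_to_traces(image: np.ndarray) -> np.ndarray:
    """Convert a grayscale cross-section image into per-column amplitude traces.

    image: 2-D array with rows as depth/time samples and columns as positions
           along the adjusted navigation line.
    Returns an array of shape (n_traces, n_samples), one trace per image column.
    """
    samples = image.astype(np.float32)
    # Rescale pixel intensities to a roughly symmetric amplitude range around zero.
    samples -= samples.mean()
    if samples.std() > 0:
        samples /= samples.std()
    return samples.T  # column-wise: each image column becomes one trace


# Example with a synthetic 8-sample by 4-trace "image".
traces = image_to_traces(np.random.randint(0, 255, size=(8, 4)))
print(traces.shape)  # (4, 8)
```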
[0049] FIG. 4 is a flowchart describing a process for importing, error-checking, updating, and converting a line of section using a graphical user interface according to one aspect of the disclosure. This process can include any examples as described by FIG. 3.
[0050] In block 402, the computing device 200 can receive, via the communications port 206, an identified subsurface structure image with an associated location map or georeferencing coordinates. The identified subsurface structure image can be received from various data sources, and can often contain a corresponding location map or georeferencing coordinates to help locate appropriate geographical coordinates where the subsurface structure image data was captured. In some examples, the identification process of subsurface structure images and corresponding maps and coordinates can be performed by an algorithm.
[0051] In block 404, the computing device 200 receives an extracted location map identified in block 402 and georeferences the location map. A navigation line can be geolocated when the navigation line is referenced in an accompanying map, at which point the location map can be extracted from various data sources and located at appropriate geographical coordinates in three-dimensional space. The extracted location map may be incorrectly located as compared to its corresponding georeferencing coordinates, at which point the location map can be georeferenced to account for any coordinate referencing system issues. In some examples, the extraction and georeferencing of navigation line location maps can be performed by an algorithm.
[0052] In block 406, the computing device 200 receives a digitized navigation line. Within a digitized georeferencing representation shown in a user interface window as a plan view provided by the display module 214, a line corresponding to the navigation line can be drawn or traced conforming to the estimated measurement locations provided by the corresponding location map. The computing device 200 can receive the digitized navigation line after the line is drawn or traced. This can provide the computing device with a navigation line and associated attributes, based on its georeferenced location map, for use in determining a more accurate line of section. In some examples, the digitization of the navigation line with respect to the location map can be performed by an algorithm.
[0053] Block 408 may be performed in conjunction with the process described by block 406 when georeferencing coordinates are provided. In block 408, the computing device receives a navigation line that is geolocated using georeferencing coordinates. When georeferencing coordinates can be extracted or provided, they can be used to georeference the navigation line. This navigation line georeferencing process may be more accurate than the process described by block 406. In some examples, extracting georeferencing coordinates and georeferencing a navigation line using those extracted coordinates can be performed by an algorithm.
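When only the start and end georeferencing coordinates of a navigation line are available, intermediate positions are commonly obtained by linear interpolation along the line. The sketch below illustrates that idea in projected coordinates; it is an assumption for illustration rather than a description of a specific georeferencing routine in the disclosure.

```python
from typing import List, Tuple


def interpolate_line(start: Tuple[float, float],
                     end: Tuple[float, float],
                     n_points: int) -> List[Tuple[float, float]]:
    """Return n_points evenly spaced (easting, northing) positions from start to end."""
    if n_points < 2:
        raise ValueError("need at least two points to describe a line")
    (x0, y0), (x1, y1) = start, end
    return [
        (x0 + (x1 - x0) * i / (n_points - 1), y0 + (y1 - y0) * i / (n_points - 1))
        for i in range(n_points)
    ]


# Example: five positions along a line between two projected coordinates.
print(interpolate_line((500000.0, 4649776.0), (500400.0, 4650176.0), 5))
```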
[0054] In block 410, the computing device 200 stores the navigation line and associated attributes in the memory 208 to be used for additional processing. The memory 208 can store the navigation line location, geometry, and associated attributes derived from various data sources and the processes described by blocks 404, 406, and 408 more accurately defining and locating the navigation line. The associated attributes can include a range of depth measurements, a top depth, a base depth, an image type, information about the source of the recorded data, a measurement unit type (e.g., kilometers, milliseconds), a geological age range, a domain type (e.g., depth, two-way-time), a navigation line length, and a navigation line geometry. In some examples, an algorithm may extract and automatically store any attributes associated with a targeted navigation line.
[0055] In block 412, the computing device 200 receives an appended subsurface structure image. The subsurface structure image identified in block 402 can be imported from the memory 208 and appended to the navigation line location using the computing device 200. The associated attributes of the navigation line stored in the process of block 410 can be applied to the subsurface structure image. In some examples, an algorithm may append the subsurface structure image to the navigation line location and apply associated attributes.
[0056] In block 414, the computing device 200 receives an adjusted subsurface structure image after being cropped, rotated, or manipulated. The appended subsurface structure image resulting from the process described by block 412 can be displayed by the display module 214 to allow for manipulation of the subsurface structure image. The subsurface structure image can be cropped, rotated, vertically or horizontally stretched, or otherwise manipulated, including manipulation of any subsurface structure image attributes, in any fashion necessary to append the entirety of the subsurface structure image to the navigation line geometry. The adjusted subsurface structure image and associated georeferencing characteristics can be stored in the memory 208 as a separate object corresponding to the navigation line. In some examples, the manipulation of the subsurface structure image can be performed by an algorithm.
[0057] In block 416, the computing device loads a digital elevation model profile for the purpose of comparison against the subsurface structure image. The subsurface structure image can represent bathymetry/topography data points. In a cross-sectional view user interface window, a digital elevation model profile can overlay the subsurface structure image resulting from the process described by block 414. Comparison of the digital elevation model profile against the bathymetry/topography delineated in the subsurface structure image can be used to determine if the location or geometry of the subsurface structure image corresponding to a navigation line is accurate. For example, a closer match of the digital elevation model profile to the bathymetry/topography delineated in the subsurface structure image can signify that the navigation line is closer to its appropriate location within the digitized georeferencing representation. A less close match between the digital elevation model profile and the bathymetry/topography delineated in the subsurface structure image can signify that the navigation line is not as close to its appropriate location within the digitized georeferencing representation. A less close match can result in the navigation line being adjusted to result in a closer match as described in block 418. In some examples, an algorithm can preemptively load a digital elevation model profile corresponding to the location of the subsurface structure image.
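Extracting the digital elevation model profile along the navigation line amounts to sampling the elevation grid at each line vertex. The sketch below uses nearest-neighbour sampling of an in-memory grid as a simplified stand-in for a GIS raster query; the grid layout and function names are illustrative assumptions.

```python
import numpy as np
from typing import List, Tuple


def sample_dem(dem: np.ndarray,
               origin: Tuple[float, float],
               cell_size: float,
               line_xy: List[Tuple[float, float]]) -> List[float]:
    """Nearest-neighbour sample of a DEM grid along a navigation line.

    dem: 2-D elevation array indexed as [row, col]
    origin: (x, y) map coordinates of the grid's upper-left cell centre
    cell_size: grid spacing in map units
    line_xy: (x, y) vertices of the navigation line
    """
    x0, y0 = origin
    profile = []
    for x, y in line_xy:
        col = int(round((x - x0) / cell_size))
        row = int(round((y0 - y) / cell_size))  # rows increase downward (southward)
        profile.append(float(dem[row, col]))
    return profile


dem = np.arange(25, dtype=float).reshape(5, 5)  # toy 5x5 elevation grid
print(sample_dem(dem, origin=(0.0, 4.0), cell_size=1.0, line_xy=[(0.0, 4.0), (2.0, 2.0)]))
```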
[0058] In block 418, the computing device stores an adjusted navigation line resulting from adjusting the navigation line in response to the comparison described by block 416. The location and geometry of the subsurface structure image can be continuously adjusted until a sufficient match between the subsurface structure image and the digital elevation model profile is achieved. As previously described by block 308, after automatically detecting, via the line analysis module 218, an adjustment in the position or orientation of the navigation line in the cross-sectional view window, the user interface update module 216 can change the position of the navigation line in the plan view window. Changes to the navigation line via the cross-sectional view window can be depicted by the adjusted navigation line in the plan view window in real time via the user interface update module 216. The user interface update module 216 can allow a user to witness a real time change in the position of the subsurface structure image in relation to the digital elevation model profile. An adjustment to the geometry or position of the subsurface structure image would correspond to a new overlay of digital elevation model profile elevations for comparison to the bathymetry/topography delineated in subsurface structure image. The adjusted navigation line geometry can be stored in the memory 208 as further described by block 310. In some examples, an algorithm can be used to adjust the navigation line to determine the best match between the subsurface structure image and the digital elevation model profile.
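The adjust-until-matched step described above can be illustrated as a small search over candidate lateral shifts of the navigation line, keeping the shift whose sampled profile best fits the bathymetry/topography of the subsurface structure image. The shift range and misfit function below are illustrative assumptions rather than details taken from the disclosure.

```python
from typing import Callable, List, Sequence, Tuple


def best_lateral_shift(image_profile: Sequence[float],
                       dem_profile_at: Callable[[float], List[float]],
                       candidate_shifts: Sequence[float]) -> Tuple[float, float]:
    """Return the candidate shift (in map units) with the lowest RMS misfit.

    dem_profile_at(shift) should return the digital elevation model elevations
    sampled along the navigation line translated laterally by `shift`.
    """
    def rms(a: Sequence[float], b: Sequence[float]) -> float:
        return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

    best = min(candidate_shifts, key=lambda s: rms(image_profile, dem_profile_at(s)))
    return best, rms(image_profile, dem_profile_at(best))


# Example with a fabricated DEM sampler whose true offset is 50 map units.
image = [-100.0, -110.0, -120.0]
sampler = lambda shift: [v - (shift - 50.0) * 0.1 for v in image]
print(best_lateral_shift(image, sampler, candidate_shifts=range(0, 101, 10)))  # (50, 0.0)
```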
[0059] In block 420, the computing device 200 transforms the subsurface structure image and navigation line into a three-dimensional viewing format. The process of block 420 can be similar to the process described by block 312 with the exception of being performed by the computing device 200. The navigation line file can be stored in the memory 208 as a line of section. The memory may store at least some characteristics of the subsurface structure image and cross-sectional and plan views in relation to the stored line of section. In some examples, an algorithm may transform the navigation line and at least some characteristics of the subsurface structure image into a three-dimensional viewing format after determining that the best match between the bathymetry/topography delineated in the subsurface structure image and the digital elevation model profile has been achieved as described by block 418.
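Conceptually, the transformation can be thought of as turning each pixel column of the appended image into a trace tagged with a coordinate interpolated along the stored line of section. The sketch below builds such in-memory trace records only; serializing them into an actual three-dimensional viewing format (for example, the SEG-Y format mentioned in the examples below) is left out of scope. The array shapes and the even spacing of one trace per image column are assumptions.

```python
# Conceptual sketch only: pair each pixel column of the appended cross-sectional image
# with an (easting, northing) coordinate interpolated along the adjusted navigation
# line. Serializing these records into a real 3D-viewable format is not shown here.
import numpy as np

def image_to_traces(image_columns, line_xy):
    """image_columns: (n_samples, n_traces) array of image values, one column per trace.
    line_xy: (M, 2) array of navigation line vertices (easting, northing)."""
    image_columns = np.asarray(image_columns, dtype=float)
    line_xy = np.asarray(line_xy, dtype=float)
    n_traces = image_columns.shape[1]
    seg = np.diff(line_xy, axis=0)
    dist = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    trace_d = np.linspace(0.0, dist[-1], n_traces)   # one position per image column
    xs = np.interp(trace_d, dist, line_xy[:, 0])
    ys = np.interp(trace_d, dist, line_xy[:, 1])
    return [{"x": float(x), "y": float(y), "samples": image_columns[:, i]}
            for i, (x, y) in enumerate(zip(xs, ys))]
```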
[0060] In block 422, the computing device 200 imports the line of section, stored in a three-dimensional viewing format, into a three-dimensional viewing environment. The computing device 200 can load the transformed line of section file into a bathymetry or topography application. The three-dimensional viewing environment application can immediately import and use the transformed line of section data.
[0061] FIG. 5 is a display of a graphical user interface including a digitized georeferencing representation in a plan view and a line of section data capture form according to one example. The graphical user interface 502 can be proprietary software such as ESRI ArcMap used for bathymetry and topography applications. The graphical user interface 502 can include any number of icons, buttons, menu options, and the like to implement the examples. The navigation line 508 can be depicted within the digitized georeferencing representation 506. The digitized georeferencing representation 506 can depict multiple navigation lines at any given time. The Line of Section data capture form 504 can be a custom non-proprietary software extension that operates within the proprietary graphical user interface 502 application framework. The Line of Section data capture form 504 can include any number of icons, buttons, menu options, and the like to implement the examples.
[0062] FIG. 6 is a display of a graphical user interface including a window for importing a cross-sectional image and its associated attributes according to one example. The Save New X-Section Image window 602 can be used to capture the associated attributes of a navigation line. The Save New X-Section Image window 602 can allow for importation of a subsurface structure image corresponding to the navigation line and its associated attributes according to the processes defined by block 410. The Save New X-Section Image window 602 can store the associated attributes and subsurface structure image in the memory 208. The Save New X-Section Image window 602 can include any number of icons, buttons, menu options, and the like to implement the examples. The Save New X-Section Image window 602 can be accessed from the Line of Section data capture form 504.
[0063] FIG. 7 is a display of a graphical user interface including a window for editing a cross-sectional image and its associated attributes according to one example. FIG. 7 can represent the same example interface as shown in FIG. 6, but populated with data and in edit mode. The Edit X-Section Image Details window 702 can be used to edit the associated attributes and subsurface structure image corresponding to a navigation line originally stored via the Save New X-Section Image window 602. The associated attributes and subsurface structure image edited within the Edit X-Section Image Details window 702 can be stored in the memory 208. The Edit X-Section Image Details window 702 can include any number of icons, buttons, menu options, and the like to implement the examples. The Edit X-Section Image Details window 702 can be accessed from the Line of Section data capture form 504.
[0064] FIG. 8 is a display of a graphical user interface including an image editing window for cropping, rotating, stretching, or otherwise manipulating a cross-sectional view of a subsurface structure image according to one example. The Image Editing Window 802 can display an editable subsurface structure image 804. The Image Editing Window 802 can perform the process as described by block 414 for cropping, rotating, stretching, shrinking, and otherwise manipulating the editable subsurface structure image 804. After performing the editing process according to block 414, the edited subsurface structure image can be displayed within the Edit X-Section Image Details window 702 where it can be compared to the stored original subsurface structure image. The Image Editing Window 802 can include any number of icons, buttons, menu options, and the like to implement the examples. The Image Editing Window 802 can be accessed from the Edit X-Section Image Details window 702.
[0065] FIG. 9 is a display of a graphical user interface including a cross-sectional view of a subsurface structure image including bathymetry/topography data points for comparison with a digital elevation model profile according to one example. The Compare X-Section bathy/topo with DEM profile at LoS location window 902 can compare the subsurface structure image 906 to a digital elevation model profile line 904. The digital elevation model profile line 904 can correspond to elevations stored in the digital elevation model profile at which a navigation line is positioned. The Compare X-Section bathy/topo with DEM profile at LoS location window 902 can allow for adjustment of the navigation line similar to the second window as described by block 306 and further described by block 416.
[0066] Editing tools 908 located within the Compare X-Section bathy/topo with DEM profile at LoS location window 902 can be used to adjust the position and geometry of the navigation line in the first window (e.g., FIG. 5). The Editing tools 908 can be used to then load the digital elevation model profile at the adjusted navigation line location to update the digital elevation model profile line for additional comparison against the bathymetry/topography delineated in the subsurface structure image. The Compare X-Section bathy/topo with DEM profile at LoS location window 902 can include any number of icons, buttons, menu options, and the like to implement the examples. The Compare X-Section bathy/topo with DEM profile at LoS location window 902 can be accessed from the Line of Section data capture form 504.
[0067] FIG. 10 is a display of a graphical user interface including a digitized georeferencing representation in a plan view that is updated based on a comparison of a subsurface structure image and digital elevation model profile in a cross-sectional view according to one example. The Compare X-Section bathy/topo with DEM profile at LoS location window 902 can be juxtaposed against the graphical user interface 502, which can allow a user to observe real time adjustment of the navigation line. The original navigation line 508 shown in the graphical user interface 502 can be adjusted via the Compare X-Section bathy/topo with DEM profile at LoS location window 902, as described by blocks 306, 308, and 418. In response to automatically detecting adjustments to the navigation line in the Compare X-Section bathy/topo with DEM profile at LoS location window 902, the original navigation line 508 can be automatically adjusted within the graphical user interface 502 and represented as an adjusted navigation line 1002. The Compare X-Section bathy/topo with DEM profile at LoS location window 902 can include a redraw profile button, which can send the geometry of the adjusted navigation line 1002 to the digital elevation model profile, returning a new elevation profile. The new elevation profile corresponding to the adjusted navigation line 1002 can be superimposed against the subsurface structure image in the Compare X-Section bathy/topo with DEM profile at LoS location window 902.
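The redraw profile step amounts to resampling the digital elevation model along the adjusted line geometry. The sketch below assumes the DEM is held as a 2-D array with an axis-aligned geotransform (top-left origin, square cells) and takes the value of the cell containing each sample point; those storage details and the helper name are assumptions, not part of the described interface.

```python
# Sketch of a "redraw profile" style resampling (assumed DEM layout): sample elevations
# from a 2-D DEM grid along the adjusted navigation line and return the new profile.
import numpy as np

def dem_profile_along_line(dem, origin_x, origin_y, cell_size, line_xy, n_points=500):
    """dem[row, col] holds elevations; (origin_x, origin_y) is the grid's top-left corner."""
    line_xy = np.asarray(line_xy, dtype=float)
    seg = np.diff(line_xy, axis=0)
    dist = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    d = np.linspace(0.0, dist[-1], n_points)          # along-line sample positions
    xs = np.interp(d, dist, line_xy[:, 0])
    ys = np.interp(d, dist, line_xy[:, 1])
    cols = np.clip(((xs - origin_x) / cell_size).astype(int), 0, dem.shape[1] - 1)
    rows = np.clip(((origin_y - ys) / cell_size).astype(int), 0, dem.shape[0] - 1)
    return d, dem[rows, cols]                         # distances and sampled elevations
```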
[0068] FIG. 11 is a display of a graphical user interface including a revised line of section data capture form according to one example. The Line of Section data capture form 1102 can be the same form as the Line of Section data capture form 504. This example depicts a Line of Section data capture form 1102 that includes more than one cross-sectional image. A single line of section as defined by the Line of Section data capture form 1102 may include more than one cross-sectional image.
[0069] In some aspects, systems, devices, and methods for importing, error-checking, updating, and converting a line of section using a graphical user interface are provided according to one or more of the following examples:
[0070] As used below, any reference to a series of examples is to be understood as a reference to each of those examples disjunctively (e.g., "Examples 1-4" is to be understood as "Examples 1, 2, 3, or 4").
[0071] Example 1 is a computer-implemented method comprising: displaying, in a first window of a graphical user interface, a navigation line of a data-acquiring system in a plan view of a digitized georeferencing representation, the digitized georeferencing representation having attributes appended thereto; generating, in a second window, a cross-sectional view of depths of a formation, the cross-sectional view including a subsurface structure image comprising measured depth information for an area; comparing the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line; in response to automatically detecting an adjustment to a position of the navigation line in the second window, changing the position of the navigation line in the first window to form an adjusted navigation line; storing at least some characteristics of the cross-sectional view of the formation and the plan view, along with the appended attributes and the adjusted navigation line as a line of section; and transforming the line of section into a format for a three-dimensional viewing environment.
[0072] Example 2 is the computer-implemented method of example 1 , wherein the subsurface structure image corresponds to the position of the navigation line, the navigation line corresponding to locations at which the data-acquiring vessel acquired the measured depth information of the subsurface structure image.
[0073] Example 3 is the computer-implemented method of example 1 , wherein comparing the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line includes adjusting the cross-sectional view in the second window to match the digital elevation model profile, the adjustment to the position of the navigation line conforming to a match between the cross-sectional view and the digital elevation model profile.
[0074] Example 4 is the computer-implemented method of example 1 , wherein the attributes include an image type, data source information, a top depth, a base depth, a domain type, a measurement unit type, a geological age range, a line length, and a navigation line geometry.
[0075] Example 5 is the computer-implemented method of example 1 , wherein the adjusted navigation line more accurately represents the locations at which the data-acquiring system acquired the measured depth information as compared to the navigation line, the measured depth information including time domain data.
[0076] Example 6 is the computer-implemented method of example 1 , further comprising: receiving an associated location map or georeferencing coordinates for the cross-sectional view; and determining an initial location of the navigation line within the digitized georeferencing representation based on at least one of the associated location map and the georeferencing coordinates.
[0077] Example 7 is the computer-implemented method of example 1 , wherein the format for a three-dimensional viewing environment includes a SEG-Y format, the SEG-Y format being a standardized file format for storing geophysical data.
[0078] Example 8 is a system comprising: a processing device; a digital displaying device; a memory device; and a non-transitory computer-readable medium including code that is executable by the processing device to: display, in a first window of a graphical user interface via the digital displaying device, a navigation line of a data-acquiring system in a plan view of a digitized georeferencing representation, the digitized georeferencing representation having attributes appended thereto; generate, in a second window viewable by the digital displaying device, a cross-sectional view of depths of a formation, the cross-sectional view including a subsurface structure image comprising measured depth information for an area; compare the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line; in response to automatically detecting an adjustment to a position of the navigation line in the second window, change the position of the navigation line in the first window to form an adjusted navigation line; store, in the memory device, at least some characteristics of the cross-sectional view of the formation and the plan view, along with the appended attributes and the adjusted navigation line as a line of section; and transform the line of section into a format for a three-dimensional viewing environment.
[0079] Example 9 is the system of example 8, wherein the subsurface structure image corresponds to the position of the navigation line, the navigation line corresponding to locations at which the data-acquiring vessel acquired the measured depth information of the subsurface structure image.
[0080] Example 10 is the system of example 8, the non-transitory computer- readable medium including code executable by the processing device to compare the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line further includes adjusting the cross-sectional view in the second window to match the digital elevation model profile, the adjustment to the position of the navigation line conforming to a match between the cross-sectional view and the digital elevation model profile.
[0081] Example 11 is the system of example 8, wherein the attributes include an image type, data source information, a top depth, a base depth, a domain type, a measurement unit type, a geological age range, a line length, and a navigation line geometry.
[0082] Example 12 is the system of example 8, wherein the adjusted navigation line more accurately represents the locations at which the data-acquiring system acquired the measured depth information as compared to the navigation line, the measured depth information including time domain data.
[0083] Example 13 is the system of example 8, the non-transitory computer- readable medium further including code that is executable by the processing device to: receive an associated location map or georeferencing coordinates for the cross- sectional view; and determine an initial location of the navigation line within the digitized georeferencing representation based on at least one of the associated location map and the georeferencing coordinates.
[0084] Example 14 is the system of example 8, wherein the format for a three- dimensional viewing environment includes a SEG-Y format, the SEG-Y format being a standardized file format for storing geophysical data.
[0085] Example 15 is a non-transitory computer-readable medium that includes instructions that are executable by a processing device to: display, in a first window of a graphical user interface, a navigation line of a data-acquiring system in a plan view of a digitized georeferencing representation, the digitized georeferencing representation having attributes appended thereto; generate, in a second window, a cross-sectional view of depths of a formation, the cross-sectional view including a subsurface structure image comprising measured depth information for an area; compare the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line; in response to automatically detecting an adjustment to a position of the navigation line in the second window, change the position of the navigation line in the first window to form an adjusted navigation line; store at least some characteristics of the cross-sectional view of the formation and the plan view, along with the appended attributes and the adjusted navigation line as a line of section; and transform the line of section into a format for a three-dimensional viewing environment.

[0086] Example 16 is the non-transitory computer-readable medium of example 15, wherein the subsurface structure image corresponds to the position of the navigation line, the navigation line corresponding to locations at which the data-acquiring vessel acquired the measured depth information of the subsurface structure image.
[0087] Example 17 is the non-transitory computer-readable medium of example 15, wherein the instructions that are executable by the processing device to compare the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line further include adjusting the cross-sectional view in the second window to match the digital elevation model profile, the adjustment to the position of the navigation line conforming to a match between the cross-sectional view and the digital elevation model profile.
[0088] Example 18 is the non-transitory computer-readable medium of example 15, wherein the attributes include an image type, data source information, a top depth, a base depth, a domain type, a measurement unit type, a geological age range, a line length, and a navigation line geometry, and wherein the format for a three-dimensional viewing environment includes a SEG-Y format, the SEG-Y format being a standardized file format for storing geophysical data.
[0089] Example 19 is the non-transitory computer-readable medium of example 15, wherein the adjusted navigation line more accurately represents the locations at which the data-acquiring system acquired the measured depth information as compared to the navigation line, the measured depth information including time domain data.
[0090] Example 20 is the non-transitory computer-readable medium of example 15, wherein the instructions are executable by the processing device to: receive an associated location map or georeferencing coordinates for the cross- sectional view; and determine an initial location of the navigation line within the digitized georeferencing representation based on at least one of the associated location map and the georeferencing coordinates.
[0091] Example 21 is a computer-implemented method comprising: displaying, in a first window of a graphical user interface, a navigation line of a data- acquiring system in a plan view of a digitized georeferencing representation, the digitized georeferencing representation having attributes appended thereto; generating, in a second window, a cross-sectional view of depths of a formation, the cross-sectional view including a subsurface structure image comprising measured depth information for an area; comparing the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line; in response to automatically detecting an adjustment to a position of the navigation line in the second window, changing the position of the navigation line in the first window to form an adjusted navigation line; storing at least some characteristics of the cross-sectional view of the formation and the plan view, along with the appended attributes and the adjusted navigation line as a line of section; and transforming the line of section into a format for a three-dimensional viewing environment.
[0092] Example 22 is the computer-implemented method of example 21 , wherein the subsurface structure image corresponds to the position of the navigation line, the navigation line corresponding to locations at which the data-acquiring vessel acquired the measured depth information of the subsurface structure image.
[0093] Example 23 is the computer-implemented method of any of examples 21 to 22, wherein comparing the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line includes adjusting the cross-sectional view in the second window to match the digital elevation model profile, the adjustment to the position of the navigation line conforming to a match between the cross-sectional view and the digital elevation model profile.
[0094] Example 24 is the computer-implemented method of any of examples 21 to 23, wherein the attributes include an image type, data source information, a top depth, a base depth, a domain type, a measurement unit type, a geological age range, a line length, and a navigation line geometry.
[0095] Example 25 is the computer-implemented method of any of examples 21 to 24, wherein the adjusted navigation line more accurately represents the locations at which the data-acquiring system acquired the measured depth information as compared to the navigation line, the measured depth information including time domain data.
[0096] Example 26 is the computer-implemented method of any of examples 21 to 25, further comprising: receiving an associated location map or georeferencing coordinates for the cross-sectional view; and determining an initial location of the navigation line within the digitized georeferencing representation based on at least one of the associated location map and the georeferencing coordinates.
[0097] Example 27 is the computer-implemented method of any of examples 21 to 26, wherein the format for a three-dimensional viewing environment includes a SEG-Y format, the SEG-Y format being a standardized file format for storing geophysical data.
[0098] Example 28 is a non-transitory computer-readable medium that includes instructions that are executable by a processing device to: display, in a first window of a graphical user interface, a navigation line of a data-acquiring system in a plan view of a digitized georeferencing representation, the digitized georeferencing representation having attributes appended thereto; generate, in a second window, a cross-sectional view of depths of a formation , the cross-sectional view including a subsurface structure image comprising measured depth information for an area; compare the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line; in response to automatically detecting an adjustment to a position of the navigation line in the second window, change the position of the navigation line in the first window to form an adjusted navigation line; store at least some characteristics of the cross- sectional view of the formation and the plan view, along with the appended attributes and the adjusted navigation line as a line of section; and transform the line of section into a format for a three-dimensional viewing environment.
[0099] Example 29 is the non-transitory computer-readable medium of example 28, wherein the subsurface structure image corresponds to the position of the navigation line, the navigation line corresponding to locations at which the data- acquiring vessel acquired the measured depth information of the subsurface structure image.
[00100] Example 30 is the non-transitory computer-readable medium of any of examples 28 to 29, wherein the instructions that are executable by the processing device to compare the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line further include adjusting the cross-sectional view in the second window to match the digital elevation model profile, the adjustment to the position of the navigation line conforming to a match between the cross-sectional view and the digital elevation model profile.

[00101] Example 31 is the non-transitory computer-readable medium of any of examples 28 to 30, wherein the attributes include an image type, data source information, a top depth, a base depth, a domain type, a measurement unit type, a geological age range, a line length, and a navigation line geometry.
[00102] Example 32 is the non-transitory computer-readable medium of any of examples 28 to 31, wherein the format for a three-dimensional viewing environment includes a SEG-Y format, the SEG-Y format being a standardized file format for storing geophysical data.
[00103] Example 33 is the non-transitory computer-readable medium of any of examples 28 to 32, wherein the adjusted navigation line more accurately represents the locations at which the data-acquiring system acquired the measured depth information as compared to the navigation line, the measured depth information including time domain data.
[00104] Example 34 is the non-transitory computer-readable medium of any of examples 28 to 33, wherein the instructions are executable by the processing device to: receive an associated location map or georeferencing coordinates for the cross- sectional view; and determine an initial location of the navigation line within the digitized georeferencing representation based on at least one of the associated location map and the georeferencing coordinates.
[00105] Example 35 is the non-transitory computer-readable medium of any of examples 28 to 34, wherein the non-transitory computer-readable medium is in a system that comprises: the processing device; a digital displaying device for displaying the first window and the second window; and a memory device for storing at least some characteristics of the cross-sectional view of the formation and the plan view, along with the appended attributes and the adjusted navigation line as a line of section.
[00106] The foregoing description of certain examples, including illustrated examples, has been presented only for the purpose of illustration and description and is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Numerous modifications, adaptations, and uses thereof will be apparent to those skilled in the art without departing from the scope of the disclosure.

Claims

What is claimed is:
1. A computer-implemented method comprising:
displaying, in a first window of a graphical user interface, a navigation line of a data-acquiring system in a plan view of a digitized georeferencing representation, the digitized georeferencing representation having attributes appended thereto;
generating, in a second window, a cross-sectional view of depths of a formation, the cross-sectional view including a subsurface structure image comprising measured depth information for an area;
comparing the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line;
in response to automatically detecting an adjustment to a position of the navigation line in the second window, changing the position of the navigation line in the first window to form an adjusted navigation line;
storing at least some characteristics of the cross-sectional view of the formation and the plan view, along with the appended attributes and the adjusted navigation line as a line of section; and
transforming the line of section into a format for a three-dimensional viewing environment.
2. The computer-implemented method of claim 1 , wherein the subsurface structure image corresponds to the position of the navigation line, the navigation line corresponding to locations at which the data-acquiring vessel acquired the measured depth information of the subsurface structure image.
3. The computer-implemented method of claim 1 , wherein comparing the cross- sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line includes adjusting the cross-sectional view in the second window to match the digital elevation model profile, the adjustment to the position of the navigation line conforming to a match between the cross-sectional view and the digital elevation model profile.
4. The computer-implemented method of claim 1 , wherein the attributes include an image type, data source information, a top depth, a base depth, a domain type, a measurement unit type, a geological age range, a line length, and a navigation line geometry.
5. The computer-implemented method of claim 1 , wherein the adjusted navigation line more accurately represents the locations at which the data-acquiring system acquired the measured depth information as compared to the navigation line, the measured depth information including time domain data.
6. The computer-implemented method of claim 1 , further comprising:
receiving an associated location map or georeferencing coordinates for the cross-sectional view; and
determining an initial location of the navigation line within the digitized georeferencing representation based on at least one of the associated location map and the georeferencing coordinates.
7. The computer-implemented method of claim 1 , wherein the format for a three- dimensional viewing environment includes a SEG-Y format, the SEG-Y format being a standardized file format for storing geophysical data.
8. A system comprising:
a processing device;
a digital displaying device;
a memory device; and
a non-transitory computer-readable medium including code that is executable by the processing device to:
display, in a first window of a graphical user interface via the digital displaying device, a navigation line of a data-acquiring system in a plan view of a digitized georeferencing representation, the digitized georeferencing representation having attributes appended thereto;
generate, in a second window viewable by the digital displaying device, a cross-sectional view of depths of a formation, the cross-sectional view including a subsurface structure image comprising measured depth information for an area;
compare the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line;
in response to automatically detecting an adjustment to a position of the navigation line in the second window, change the position of the navigation line in the first window to form an adjusted navigation line;
store, in the memory device, at least some characteristics of the cross-sectional view of the formation and the plan view, along with the appended attributes and the adjusted navigation line as a line of section; and
transform the line of section into a format for a three-dimensional viewing environment.
9. The system of claim 8, wherein the subsurface structure image corresponds to the position of the navigation line, the navigation line corresponding to locations at which the data-acquiring vessel acquired the measured depth information of the subsurface structure image.
10. The system of claim 8, the non-transitory computer-readable medium including code executable by the processing device to compare the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line further includes adjusting the cross-sectional view in the second window to match the digital elevation model profile, the adjustment to the position of the navigation line conforming to a match between the cross-sectional view and the digital elevation model profile.
11. The system of claim 8, wherein the attributes include an image type, data source information, a top depth, a base depth, a domain type, a measurement unit type, a geological age range, a line length, and a navigation line geometry.
12. The system of claim 8, wherein the adjusted navigation line more accurately represents the locations at which the data-acquiring system acquired the measured depth information as compared to the navigation line, the measured depth information including time domain data.
13. The system of claim 8, the non-transitory computer-readable medium further including code that is executable by the processing device to:
receive an associated location map or georeferencing coordinates for the cross-sectional view; and
determine an initial location of the navigation line within the digitized georeferencing representation based on at least one of the associated location map and the georeferencing coordinates.
14. The system of claim 8, wherein the format for a three-dimensional viewing environment includes a SEG-Y format, the SEG-Y format being a standardized file format for storing geophysical data.
15. A non-transitory computer-readable medium that includes instructions that are executable by a processing device to:
display, in a first window of a graphical user interface, a navigation line of a data-acquiring system in a plan view of a digitized georeferencing representation, the digitized georeferencing representation having attributes appended thereto;
generate, in a second window, a cross-sectional view of depths of a formation, the cross-sectional view including a subsurface structure image comprising measured depth information for an area;
compare the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line;
in response to automatically detecting an adjustment to a position of the navigation line in the second window, change the position of the navigation line in the first window to form an adjusted navigation line;
store at least some characteristics of the cross-sectional view of the formation and the plan view, along with the appended attributes and the adjusted navigation line as a line of section; and
transform the line of section into a format for a three-dimensional viewing environment.
16. The non-transitory computer-readable medium of claim 15, wherein the subsurface structure image corresponds to the position of the navigation line, the navigation line corresponding to locations at which the data-acquiring vessel acquired the measured depth information of the subsurface structure image.
17. The non-transitory computer-readable medium of claim 15, wherein the instructions that are executable by the processing device to compare the cross-sectional view of depths of the formation to a digital elevation model profile to determine an adjustment to a position of the navigation line further include adjusting the cross-sectional view in the second window to match the digital elevation model profile, the adjustment to the position of the navigation line conforming to a match between the cross-sectional view and the digital elevation model profile.
18. The non-transitory computer-readable medium of claim 15, wherein the attributes include an image type, data source information, a top depth, a base depth, a domain type, a measurement unit type, a geological age range, a line length, and a navigation line geometry, and wherein the format for a three-dimensional viewing environment includes a SEG-Y format, the SEG-Y format being a standardized file format for storing geophysical data.
19. The non-transitory computer-readable medium of claim 15, wherein the adjusted navigation line more accurately represents the locations at which the data-acquiring system acquired the measured depth information as compared to the navigation line, the measured depth information including time domain data.
20. The non-transitory computer-readable medium of claim 15, wherein the instructions are executable by the processing device to:
receive an associated location map or georeferencing coordinates for the cross-sectional view; and
determine an initial location of the navigation line within the digitized georeferencing representation based on at least one of the associated location map and the georeferencing coordinates.
PCT/US2018/036028 2018-06-05 2018-06-05 Automatically updating a georeferencing graphical user interface for navigation line adjustments Ceased WO2019236064A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/US2018/036028 WO2019236064A1 (en) 2018-06-05 2018-06-05 Automatically updating a georeferencing graphical user interface for navigation line adjustments
FR1903593A FR3082026A1 (en) 2018-06-05 2019-04-03 AUTOMATIC UPDATE OF A GEOREFERENCING GRAPHICAL USER INTERFACE FOR NAVIGATION LINE ADJUSTMENTS

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2018/036028 WO2019236064A1 (en) 2018-06-05 2018-06-05 Automatically updating a georeferencing graphical user interface for navigation line adjustments

Publications (1)

Publication Number Publication Date
WO2019236064A1 true WO2019236064A1 (en) 2019-12-12

Family

ID=68728499

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/036028 Ceased WO2019236064A1 (en) 2018-06-05 2018-06-05 Automatically updating a georeferencing graphical user interface for navigation line adjustments

Country Status (2)

Country Link
FR (1) FR3082026A1 (en)
WO (1) WO2019236064A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113252072A (en) * 2021-02-02 2021-08-13 中国人民解放军海军大连舰艇学院 Digital water depth model navigable capability assessment method based on ring window
CN118133592A (en) * 2024-05-10 2024-06-04 贵州省水利水电勘测设计研究院有限公司 A three-dimensional expression and data storage method for special trend trace lines

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030012411A1 (en) * 2001-07-13 2003-01-16 Sjostrom Keith Jerome System and method for displaying and collecting ground penetrating radar data
US20090271719A1 (en) * 2007-04-27 2009-10-29 Lpa Systems, Inc. System and method for analysis and display of geo-referenced imagery
US20090043507A1 (en) * 2007-08-01 2009-02-12 Austin Geomodeling, Inc. Method and system for dynamic, three-dimensional geological interpretation and modeling
US20140041041A1 (en) * 2011-01-31 2014-02-06 The Government Of The United States Of America, As Represented By The Secretary Of The Navy Meteorology and oceanography geospatial analysis toolset
US8605549B1 (en) * 2012-05-23 2013-12-10 The United States Of America As Represented By The Secretary Of The Navy Method for producing a georeference model from bathymetric data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ERIC C. TATE ET AL.: "Creating a Terrain Model for Floodplain Mapping", JOURNAL OF HYDROLOGIC ENGINEERING, vol. 7, no. 2, 1 March 2002 (2002-03-01), pages 100 - 108, Retrieved from the Internet <URL:https://ceprofs.civil.tamu.edu/folivera/Papers%20PDFs/2002%20Creating%20a%20Terrain%20Model%20for%20Floodplain%20Mapping.pdf> *

Also Published As

Publication number Publication date
FR3082026A1 (en) 2019-12-06

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18921657

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18921657

Country of ref document: EP

Kind code of ref document: A1