
GB2635677A - Robotic vehicle navigation system and method - Google Patents

Robotic vehicle navigation system and method

Info

Publication number
GB2635677A
GB2635677A GB2317696.9A GB202317696A
Authority
GB
United Kingdom
Prior art keywords
sensor
pose
robotic vehicle
reconstruction
viewpoint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2317696.9A
Inventor
Martin Cross Gary
Robert Goodall Mark
Pehlivanturk Can
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rovco Ltd
Original Assignee
Rovco Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rovco Ltd filed Critical Rovco Ltd
Priority to GB2317696.9A
Priority to PCT/GB2024/052892 (published as WO2025109305A1)
Publication of GB2635677A

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/60Intended control result
    • G05D1/656Interaction with payloads or external entities
    • G05D1/689Pointing payloads towards fixed or moving targets
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B63SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63GOFFENSIVE OR DEFENSIVE ARRANGEMENTS ON VESSELS; MINE-LAYING; MINE-SWEEPING; SUBMARINES; AIRCRAFT CARRIERS
    • B63G8/00Underwater vessels, e.g. submarines; Equipment specially adapted therefor
    • B63G8/001Underwater vessels adapted for special purposes, e.g. unmanned underwater vessels; Equipment specially adapted therefor, e.g. docking stations
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/20Control system inputs
    • G05D1/22Command input arrangements
    • G05D1/221Remote-control arrangements
    • G05D1/222Remote-control arrangements operated by humans
    • G05D1/224Output arrangements on the remote controller, e.g. displays, haptics or speakers
    • G05D1/2244Optic
    • G05D1/2245Optic providing the operator with a purely computer-generated representation of the environment of the vehicle, e.g. virtual reality
    • G05D1/2246Optic providing the operator with a purely computer-generated representation of the environment of the vehicle, e.g. virtual reality displaying a map of the environment
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/20Control system inputs
    • G05D1/24Arrangements for determining position or orientation
    • G05D1/246Arrangements for determining position or orientation using environment maps, e.g. simultaneous localisation and mapping [SLAM]
    • G05D1/2465Arrangements for determining position or orientation using environment maps, e.g. simultaneous localisation and mapping [SLAM] using a 3D model of the environment
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2105/00Specific applications of the controlled vehicles
    • G05D2105/80Specific applications of the controlled vehicles for information gathering, e.g. for academic research
    • G05D2105/89Specific applications of the controlled vehicles for information gathering, e.g. for academic research for inspecting structures, e.g. wind mills, bridges, buildings or vehicles
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2107/00Specific environments of the controlled vehicles
    • G05D2107/25Aquatic environments
    • G05D2107/27Oceans
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2109/00Types of controlled vehicles
    • G05D2109/30Water vehicles
    • G05D2109/38Water vehicles operating under the water surface, e.g. submarines
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A navigation system and method for manoeuvring a subsea robotic vehicle in a subsea environment. A user may input a selection of a viewpoint for the robotic vehicle to navigate to. The viewpoint is a 3D data point which is part of a 3D model. A navigation computing system with a local 3D model of the environment receives the viewpoint, and a planning system determines the location of the viewpoint within its local 3D model. Within the local 3D model, a normal to the surface on which the viewpoint is located is calculated. The navigation system then determines a final pose of the robotic vehicle based on a distance along the normal and a known camera/range sensor pose of the robotic vehicle. The planning system calculates a path to the final pose and can cause the propulsion system to move the robotic vehicle along the path. The final pose ensures that, when the robotic vehicle moves to it, the viewpoint is within the FOV of the robotic vehicle's sensor.

Description

Robotic Vehicle Navigation System and Method
Background
Remotely Operated Vehicles (ROVs) are used for subsea tasks such as surveys and mapping of underwater assets. ROVs are typically controlled by an experienced pilot using a joystick connected to a computer on the surface, typically onboard a support vessel or other nearby surface location. The joystick provides commands in the lateral x, y, z and rotational yaw axes, and the computer converts these commands into allocated thruster demands. An important aspect of the pilot's responsibilities is to position the ROV so that a sensor such as a camera views a region of interest for a particular task. This is a task which requires highly skilled operators with years of operational experience.
Under normal operation the pilot is responsible for all aspects of the control and navigation of the ROV, which is a highly labour-intensive process that often requires more than one pilot working in shifts. The 1:1 relationship between the operator and ROV is further constrained by the need for low-latency communications, as a real-time video stream is important for visual feedback of the vehicle motion in dynamic environments. Pilots therefore typically control the ROV directly from a support vessel or through telepresence from a nearby location.
Summary
The present inventors have devised a robotic vehicle navigation system and method which can exhibit one or more of the following advantages relative to known systems:
    • Simpler ROV control, such that non-skilled operators can operate the ROV;
    • More efficient ROV movement in an operational environment;
    • Reduced bandwidth requirements of the communication system;
    • Reliable and repeatable views of a given scene or location, which can improve mapping and 3D reconstructions;
    • Improved repeatability of the survey as a whole.
By way of a non-limiting overview, embodiments of the invention relate to a navigation system and method for manoeuvring a subsea robotic vehicle in a subsea environment. An operator may input a selection of a viewpoint for the robotic vehicle to navigate to. The viewpoint is a 3D data point which is part of a 3D model. A navigation computing system with a local 3D model of the environment receives the viewpoint and a planning system determines the location of the viewpoint within its local 3D model. Within the local 3D model, a normal vector from the surface on which the viewpoint is located is calculated. The navigation system then determines a final pose of the robotic vehicle based on a distance along the normal vector, and a known camera/range sensor pose of the robotic vehicle. The planning system calculates a path to the final pose and can cause the propulsion system to move the robotic vehicle based on the path. The final pose ensures that, when the robotic vehicle moves to it, the viewpoint is within the FOV of the robotic vehicle's sensor.
In accordance with a first aspect of the invention, there is provided a navigation system for manoeuvring a subsea robotic vehicle in an environment, the navigation system comprising a user terminal computing device and a navigation computing system, the robotic vehicle comprising: a body; a sensor for underwater mapping, the sensor having a known pose with respect to the body and a field of view, FOV; a pose sensor for determining the pose of the body; and a propulsion system for moving the body, the user terminal computing device comprising: a first [remote] 3D reconstruction of the environment, the first 3D reconstruction comprising a plurality of first 3D data points; and a selection device configured to enable selection of one of the first 3D data points for the robotic vehicle to view with the sensor, the navigation computing system comprising: a second [local] 3D reconstruction of the environment, the second 3D reconstruction comprising a plurality of second 3D data points corresponding to the plurality of first 3D data points; and a planning system configured to: query the second 3D reconstruction in response to receiving a signal representative of the selected one of the first 3D data points, to obtain one of the second 3D data points, wherein the second 3D data point corresponds to the first 3D data point; determine a surface on which the second 3D data point is located; calculate the normal to the surface; determine a first distance from the surface in the direction of the surface normal to define an initial target pose; determine a final target pose, based on the initial target pose and the known sensor pose with respect to the body, such that the second 3D data point is estimated to be within the FOV of the sensor when the pose of the body is at the final target pose; calculate a path between the pose of the body of the robotic vehicle and the final target pose; and generate a command signal to cause the propulsion system to move the body of the robotic vehicle based on the path.
Thus, the navigation system according to the first aspect of the invention enables a vehicle to be commanded to view a given area through interaction with a 3D reconstructed model of a scene, which may or may not be complete. An operator can set a specific point they wish to view and, without the need for a joystick or other continuous control input, the vehicle will be manoeuvred into a pose that ensures the specified viewpoint is within at least one sensor's optimal field of view. This supports the control of a vehicle over high-latency, low-bandwidth communications and radically changes the control relationship from 1:1 to many-to-many, where the operator need only point and click a viewpoint on a map to manoeuvre the vehicle.
Optionally, the sensor is movably coupled to the body with at least 1 degree of freedom.
Optionally, the planning system comprises a forward kinematic generator configured to calculate the pose of the sensor relative to the pose of the body to determine the transformation between the initial target pose and the final target pose.
Optionally, determining the final target pose comprises applying the transformation to the initial target pose.
Optionally, the sensor is movably coupled to the body by at least one joint comprising a joint position sensor, wherein the forward kinematic transform is based on an output signal from the joint position sensor.
Optionally, the sensor is movably coupled to the body with at least 2 degrees of freedom comprising movement in the pan and tilt axes.
Optionally, a subset of the plurality of second data points define a surface area of the surface. Optionally, the normal to the surface is calculated based on the average plane of the surface area.
Optionally, the final target pose is determined such that the sensor can view the surface area when the body is at the final target pose.
Optionally, the final target pose is based on the FOV of the sensor.
Optionally, the normal to the surface is calculated by analysing the gradient of the surface surrounding the target pose. Optionally, the normal to the surface is determined by querying the second 3D reconstruction where the second 3D reconstruction may contain surface normal information (e.g. spline based surfaces).
Optionally, the path is calculated by defining a network of connected nodes to obtain an obstacle-free path between a node equating to the pose of the body of the robotic vehicle and a node equating to the final target pose. In one example, the path is calculated based on a PRM or PRM* algorithm.
Optionally, the path is calculated: based on an initial pose of the body with respect to the surface and/or the first distance; and, to traverse the surface to arrive at the target pose.
Optionally, querying the second 3D reconstruction comprises searching the second 3D reconstruction to identify the closest approximation of one of the first 3D data points within a determined search radius, wherein one of the second 3D data points is the closest approximation of the one of the first 3D data points.
Optionally, the first 3D reconstruction is either: dynamic and based on data received from the second 3D reconstruction; or static and predetermined.
Optionally, the second 3D reconstruction is downsampled and transmitted to the user terminal computing device.
Optionally, the plurality of second 3D data points are generated based on sensor data from the sensor.
Optionally, the navigation computing system comprises a vehicle computing device, wherein the robotic vehicle comprises the vehicle computing device.
Optionally, the second 3D reconstruction of the environment is constructed using SLAM or global navigation techniques.
Optionally, the robotic vehicle further comprises a range sensor, wherein the plurality of second 3D data points are further generated based on range sensor data from the range sensor.
Optionally, the navigation computing system is a vehicle computing device.
Optionally, the command signal defines the final target pose, and the navigation computing system calculates the path.
Optionally, the command signal is provided directly to the propulsion system. Optionally, the command signal provides 6 degrees of freedom for control in three translational axes (x, y, z) and three rotational axes (roll, pitch, yaw).
In accordance with a second aspect of the invention, there is provided a method of navigating a subsea robotic vehicle in an environment, the robotic vehicle comprising a body and a sensor for underwater mapping, the sensor having a pose with respect to the body and a field of view, FOV, the method comprising: receiving a viewpoint signal comprising a 3D representation of a viewpoint location within an environment; executing a 3D reconstruction algorithm to generate a 3D reconstruction of the environment, the 3D reconstruction comprising a plurality of 3D data points; querying the 3D reconstruction in response to receiving the viewpoint signal to obtain a 3D data point within the 3D reconstruction, wherein the 3D data point corresponds to the viewpoint location; determining a surface on which the 3D data point is located; calculating the normal to the surface; determining a first distance from the surface in the direction of the surface normal to define an initial target pose; obtaining a sensor pose representing the pose of the sensor of the subsea robotic vehicle, and a body pose representing the pose of the body of the subsea robotic vehicle; determining a final target pose, based on the initial target pose and the sensor pose, such that the 3D data point is within an estimated FOV of the sensor; calculating a path between the pose of the body of the robotic vehicle and the final target pose; and generating a command signal to cause the body of the robotic vehicle to move based on the path to arrive at the final target pose.
Optionally, the method further comprises receiving sensor data representing range measurements from the sensor. Optionally, the 3D reconstruction algorithm updates the 3D reconstruction from the sensor data.
In accordance with a third aspect of the invention, there is provided a subsea robotic vehicle implementing the method of the second aspect.
Brief Description of the Drawings
Figure 1 is a diagram of a navigation system according to an embodiment of the invention.
Figure 2 is a diagram of a subsea environment illustrating the operation of a vehicle with respect to the environment.
Figure 3 is a diagram of a subsea environment illustrating the target pose of a vehicle with respect to the environment.
Figure 4 is a diagram of a virtual subsea environment illustrating the target pose of a vehicle with respect to the virtual environment.
Figure 5 is a flow diagram illustrating a method of navigating a vehicle according to an embodiment of the invention.
Figure 6 is a diagram of an alternative navigation system according to an embodiment of the invention.
Figure 7 is a diagram of an alternative navigation system according to an embodiment of the invention.
Detailed Description
By way of a non-limiting overview, the present invention relates to a new architecture and method for controlling a robotic vehicle, such as an ROV, without the need for a joystick, by allowing an operator to control the position of the vehicle through interaction with a reconstruction of a scene, selecting a desired viewpoint with a point-and-click action. The invention produces a navigation viewpoint positioned with respect to the environment that has been observed and mapped by the vehicle autonomously, using distance to a structure in combination with the surface geometry and trajectory estimation algorithms. The sensor field of view can be centred on the viewpoint selected by the operator, accounting for the position and orientation of the sensor with respect to the vehicle body, by movement of the vehicle body and/or the joints that are connected to the camera, such as a pan and tilt unit or robotic arm.
In harsh environments, such as subsea, computing resources are often constrained, due to protective casings for example, and communications between the computing device on a vehicle and a larger network may also be restricted in speed or throughput.
Embodiments of the invention provide survey efficiency increases which enable a robotic vehicle to better survey and manoeuvre in subsea environment scenarios. In addition, embodiments of the invention may remove the need for the pilot to compensate for the environment, e.g., current or tidal perturbations.
Figure 1 shows a navigation system 1 for manoeuvring a subsea robotic vehicle in an environment. The navigation system 1 comprises at least a robotic vehicle 18, a user terminal computing device 10, and a navigation computing system 32. Figure 1 shows the robotic vehicle 18 comprising the navigation computing system 32, which in turn comprises a vehicle computing device 33. However, the navigation computing system 32 may be a distributed computing system, distributed over the robotic vehicle 18, servers located on land, and/or a support vessel in communication with the robotic vehicle 18.
Figure 1 shows a user terminal computing device 10 according to an embodiment of the invention. The user terminal computing device 10 can comprise a remote 3D reconstruction 12 of an environment (e.g., a 3D model of a subsea environment) in which the robotic vehicle 18 will manoeuvre. The remote 3D reconstruction 12, i.e., a 3D model, may comprise a plurality of remote 3D data points and be represented as a single data structure (map) containing multiple remote 3D data points or generated from multiple smaller data structures (sub maps) combined into a more complex geometric model, such as a graph, tree, point cloud, disparity map, mesh, an octree, and/or any presentation of a 3D rendered scene. That is, a 3D data point may be any 3D data defining a spatial characteristic, such as, a mesh surface, point cloud point, voxel, etc. The remote 3D reconstruction 12 can be constructed in real time or can be predetermined (e.g., from a priori knowledge). The remote 3D reconstruction 12 may be provided by one or more User Interfaces (UIs) on the user terminal computing device 10.
The remote 3D reconstruction 12 may be constructed in real time, such that the user terminal computing device 10 maintains a version or representation of the local 3D reconstruction 12a generated from received measurements (e.g., range measurements) from the robotic vehicle 18. The local 3D reconstruction 12a and the remote 3D reconstruction 12 may be represented or generated using different or similar methods.
The user terminal computing device 10 may allow an operator to manipulate a view of the remote 3D reconstruction (e.g., via the UI) and/or select a viewpoint 14 to be viewed by one or more sensors 22, 28 of the robotic vehicle 18. Thus, the remote 3D reconstruction 12 may be static (e.g., if generated from predetermined, or a priori, knowledge) or dynamic (e.g., if generated based on the local 3D reconstruction 12a, or based on received measurements).
The operator may be an external system or a user. The external system may be a selection device 16, such as a computer system running an algorithm suitable for selecting survey viewpoints. Alternatively, the user may interact with the selection device 16, such as a computer mouse. The selection device 16 provides a selection input to the remote 3D reconstruction 12 (e.g., via a UI).
Advantageously the operator does not need to know how to orientate or position the robotic vehicle 18 to ensure a good view of a viewpoint on a structure or surface in the environment. This reduces the cognitive/compute load on the operator and the required level of training to operate the robotic vehicle 18. This makes the robotic vehicle 18 easier to control for the decision-making operator and removes the need for the operator to react to environmental perturbations such as changes in water current direction.
In a system with multiple user terminal computing devices (e.g., 10), each device may maintain its own version of a remote 3D reconstruction (e.g., 12). If each remote 3D reconstruction is constructed in real time, then each is presented in a 3D rendered scene to display a representation of the local 3D reconstruction 12a (i.e., a 3D model) of the navigation computing system 32. Each of the user terminal computing devices may allow a respective operator to manipulate the view of the scene and select a point to be viewed by sensors of a robotic vehicle. This allows for a many-to-one, or a many-to-many, relationship between operators and one or more robotic vehicles 18, where the operators can control the robotic vehicles (e.g., 18) through the user terminal computing device 10. The user terminal computing device 10 may, for example, be a web interface.
In addition, if two or more operators are providing a respective selection input for a single robotic vehicle 18, then the robotic vehicle 18 may comprise a system of prioritisation based on operator privileges, for example, a leader-follower or requester-approver pattern, although others are foreseeable.
The selection input may be an input from a user or an external system that may automatically issue a desirable viewpoint. If the external system is used to provide a selection input, then the viewpoint 14 may be a feature or landmark which is considered important to the survey task which the external system is configured to search for and/or generate a 3D reconstruction. A viewpoint 14 comprises at least data representative of one of the remote 3D data points. A viewpoint 14 may comprise data representative of more than one of the remote 3D data points. More than one of the remote 3D data points may be representative of a landmark or environmental feature in the survey environment.
Once a viewpoint 14 is selected it may be transmitted to a robotic vehicle 18 operating in a subsea environment via a communication link (wired or wireless). The communication link may be enabled via transceiver 29 of the user terminal computing device 10 and transceiver 31 of the robotic vehicle 18. The communication link may be a low bandwidth link. The communication link may be bidirectional, or unidirectional from the user terminal computing device 10 to the robotic vehicle 18.
The robotic vehicle 18 comprises a body 20, a sensor 22, a pose sensor 24, a propulsion system 26, an optional additional sensor 28 (e.g., a range sensor), an optional joint position sensor 30, an optional transceiver 31, and a vehicle computing device 33. The robotic vehicle 18 may be tasked with surveying a subsea region or asset. As shown in Figure 1, the body 20 may comprise: the pose sensor 24, the propulsion system 26, the additional sensor 28, the transceiver 31, the navigation computing system 32, and the vehicle computing device 33. In addition, the sensor 22 may be coupled to the body 20 to form the robotic vehicle 18. The robotic vehicle 18 can comprise a remotely operable or autonomous mobile platform such as an underwater remotely operated vehicle (ROV), an autonomous underwater vehicle (AUV), an unmanned air vehicle (UAV), an unmanned ground vehicle (UGV), an unmanned underwater vehicle (UUV), or an unmanned surface vehicle (USV). When applied to a vehicle such as an autonomous or unmanned system, the navigation computing system 32 can be used for simultaneous localization and mapping (SLAM) via a local 3D reconstruction algorithm.
The pose sensor 24 is suitable for determining the pose of the body 20. The pose sensor 24 is a source of pose (i.e., orientation and/or position) data for the body 20. The pose sensor 24 may be a physical sensor, or may be a component of the navigation computing system 32. The pose sensor 24 may be suitable for determining the pose of the sensor 22 and any additional sensors, such as the additional sensor 28. The pose of the sensor 22 may be estimated based on a known spatial relationship between the dedicated pose sensor 24 and the sensor 22. The pose sensor 24 may be an inertial navigation sensor (e.g., including GPS, etc.). The pose sensor 24 may comprise multiple pose sensors, such as a pose sensor to determine the pose of the body 20, and an auxiliary pose sensor to determine the pose of the sensor 22.
In an example, the pose sensor 24 is a component of the navigation computing system 32. The pose data may be estimated based on the received images from the sensor 22, e.g., a camera (i.e., odometry), for example, the pose data may be generated by a SLAM module of the navigation computing system 32 based on a SLAM 3D reconstruction algorithm. That is, pose data associated with the sensor 22 may be estimated based on the local 3D reconstruction algorithm. In addition, pose data associated with the body 20 may be derived based on a known or calculated relationship between the pose of the sensor 22 and the body 20.
If the pose sensor 24 results in data representing the pose of the body 20, then the vehicle computing device 33 (or navigation computing system 32) of the robotic vehicle 18 may determine the pose of the sensor 22 based on a known relationship between the pose of the body 20 and the pose of the sensor 22. The pose sensor 24 may be configured to correctly place measurements from the sensor 22 (and optionally sensor 28) into the local 3D reconstruction 12a and for control of the robotic vehicle 18.
The sensor 22 is suitable for underwater mapping, and the sensor 22 has a known (e.g., predetermined) pose (i.e., spatial relationship) with respect to the body 20 and a field of view (FOV). The sensor 22 may be a camera, or a range sensor, such as a multibeam echosounder (MBES), or lidar. The robotic vehicle 18 may carry one or more additional sensors 28 on-board. The additional sensors 28 may be any suitable sensor for underwater mapping, such as an additional camera, an additional range sensor, a radiation sensor, temperature sensor, etc. In the example shown in Figure 1, the sensor 22 is distinct from the body 20 and may be movably coupled to the body 20, and the body 20 comprises the additional sensor 28. This arrangement may be advantageous if the additional sensor 28 is an MBES or lidar sensor.
Alternatively, the sensor 22 and the additional sensor 28 may be distinct from the body 20 and may be movably coupled to the body 20. In this example, the pose sensor 24 may be suitable for determining a single pose for both the sensor 22 and the additional sensor 28. The sensor 22 and additional sensor 28 may both be cameras in a stereoscopic arrangement. The stereoscopic arrangement can produce range measurements to generate the local 3D reconstruction 12a, i.e., a geometric model of the environment in a local coordinate frame. Thus, the local 3D data points of the local 3D reconstruction 12a may be generated based on sensor data from the sensor 22 and/or additional sensor 28.
One or more of the sensors 22, 28 may be movably coupled to the body 20 with at least 1 degree of freedom (e.g., with a joint suitable for movement in the pan or tilt axis). For example, one or more of the sensors 22, 28 may be mounted on a joint that allows for motion in the pan and tilt axes (i.e., 2 degrees of freedom). If the one or more sensors 22, 28 are movably coupled, then the robotic vehicle 18 is configured to determine the pose of the sensor 22 relative to the pose of the body 20. For example, a joint position sensor 30 (e.g., an encoder) may be used to determine the position of the joints between the movably coupled sensor(s) 22 and the body 20. That is, the joint position sensor 30 may be configured to measure the angles of any revolute joints.
The propulsion system 26 is suitable for moving the robotic vehicle 18 within the subsea environment. The propulsion system 26 is configured to move and steer the robotic vehicle 18 in accordance with command signals provided by navigation computing system 32 which may or may not be remote with respect to the robotic vehicle 18.
The vehicle computing device 33 may be coupled to each of the sensor 22, the pose sensor 24, the propulsion system 26, the additional sensor 28, the joint position sensor 30, and a transceiver 31 for wirelessly communicating with one or more user terminal computing devices 10. The vehicle computing device 33 may receive signals from the sensor 22, the pose sensor 24, the additional sensor 28, the joint position sensor 30, and the transceiver 31 (which wirelessly receives the viewpoint 14 from one or more user terminal computing devices 10). The vehicle computing device 33 may transmit signals to the propulsion system 26, the sensor 22, and the transceiver 31 for wirelessly transmitting local data points to one or more user terminal computing devices 10.
The navigation computing system 32 can be arranged to be deployed on the robotic vehicle 18, e.g., via the vehicle computing device 33 (as shown in Figure 1), or may be partially deployed on the vehicle computing device 33 such that the navigation computing system 32 comprises the vehicle computing device 33 and at least one other additional computing device (as shown in Figures 6 and 7). The navigation computing system 32 may be arranged for deployment into harsh environments, such as continuous use underwater beyond two metres, subsea, or in the vacuum of space, on a platform, or the like.
The navigation computing system 32 is configured to process data from the sensor 22 and optionally the additional sensor 28. The navigation computing system comprises the local 3D reconstruction 12a of the environment (e.g., a subsea environment). The navigation computing system 32 may comprise a SLAM module to generate the local 3D reconstruction 12a, for example, from camera images and range measurements. However, any suitable method for generating a local 3D reconstruction may be used, e.g., a Global Navigation System. The local 3D reconstruction 12a may comprise a plurality of local 3D data points and be represented as a single data structure (map) containing multiple local 3D data points or generated from multiple smaller data structures (sub maps) combined into a more complex geometric model, such as a graph, tree, point cloud, disparity map, mesh, an octree, or any presentation of a 3D rendered scene. The local 3D reconstruction 12a can be constructed in real time. If the local 3D reconstruction 12a is constructed in real time, then the user terminal computing device 10 may maintain a version or representation of a remote 3D reconstruction generated from sensor measurements. Therefore, the remote 3D reconstruction 12 may comprise a plurality of remote 3D data points that may correspond to (but may not be exact replicas of) the plurality of local 3D data points of the local 3D reconstruction 12a.
If the navigation computing system 32 comprises a SLAM module to generate the local 3D reconstruction 12a, then the SLAM module may receive a camera image from the sensor 22. The SLAM module may be configured to execute a SLAM 3D reconstruction algorithm to develop the local 3D reconstruction 12a. To generate the SLAM 3D model from video or camera images, the SLAM 3D reconstruction algorithm can take key points that, for example, make up the edges of an object in the image and associate the key points with corresponding key points from other images, such as earlier captured images or images captured concurrently by other cameras (such as the additional sensor 28). A SLAM model generated using camera images or other visual data may be called a visual SLAM (VSLAM) model, and the corresponding local 3D reconstruction algorithm a VSLAM 3D reconstruction algorithm. In another example, the additional sensor 28 may be a range sensor configured to provide one scan of range data at a time, e.g., one swath of depth soundings for a single MBES ping. The local 3D reconstruction 12a may be generated from visual or range data alone, or may be generated as a combined 3D map with both visual (from the SLAM module) and range sensor data represented.
As explained, other methods of 3D reconstruction are possible, for example, based on the type of sensor 22 used. Since the implementation of 3D reconstruction algorithms is known, per se, they will not be discussed in more detail.
Upon initiating a survey or viewpoint investigation, information about the surroundings of the robotic vehicle 18 may be incomplete, e.g., if an area is yet to be observed. The navigation system 1 may be configured to generate command signals to operate the robotic vehicle 18 to move throughout the environment and consume real-time information to generate at least part of the local 3D reconstruction 12a. Through iterative updates, the navigation system can adapt to real-time changes (in location and environment) whilst continuously updating the local 3D reconstruction 12a. Therefore, the robotic vehicle 18 can advantageously operate in environments where no prior information (environmental or otherwise) exists, using a local 3D reconstruction (e.g., map) that is constructed in real time as the sensors 22 of the robotic vehicle 18 observe its surroundings. Advantageously, the navigation system 1 can take advantage of real-time observations about the deployment environment to ensure the local 3D reconstruction 12a (and optionally the remote 3D reconstruction 12) remains error-free in areas where there is limited or no global positioning.
The local 3D reconstruction 12a may be maintained and used for all mapping queries required for generating viewpoints and validating paths for the robotic vehicle 18 to traverse to a viewpoint. Range measurements and/or 3D data points may be contained in a data structure of the local 3D reconstruction 12a, which can be transmitted to the user terminal computing device 10 efficiently and may be downsampled if required.
Once a viewpoint 14 is selected, the user terminal computing device 10 transmits data corresponding to the selected viewpoint 14 to the navigation computing system 32 (e.g., via transceiver 29 or a direct wired connection). The navigation computing system 32 further comprises a planning system 34. The planning system 34 is configured to query the local 3D reconstruction 12a in response to receiving the viewpoint 14, to obtain a local viewpoint 14a. The local viewpoint 14a may comprise one or more of the local 3D data points, and the one or more local 3D data points (of the local 3D reconstruction 12a) correspond to one or more of the remote 3D data points (of the remote 3D reconstruction 12). Since the local 3D reconstruction 12a may not be an exact replica of the remote 3D reconstruction 12, the one or more local 3D data points defining the local viewpoint 14a may or may not exactly correspond to the one or more remote 3D data points defining the viewpoint 14.
To determine the location of the local viewpoint 14a within the local 3D reconstruction 12a, the planning system 34 may search the local 3D reconstruction 12a for the requested viewpoint (e.g., within a predetermined radius) to locate the closest approximation of the viewpoint 14. For example, the local 3D reconstruction 12a may be searched to identify the closest approximation of the requested viewpoint within the predetermined radius. This may involve controlling a pan-tilt unit attached to the sensor 22: for example, if a pose to view the selected point is on the ground, the sensor 22 may need to be pointed down, or, when traversing between objects, the sensor 22 may be controlled to point down to observe the environment between points rather than lose tracking. This search is advantageous if the local viewpoint 14a does not exactly correspond to the one or more remote 3D data points defining the viewpoint 14.
Thus, the local viewpoint 14a may be the closest approximation of the viewpoint 14 within the local 3D reconstruction 12a.
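By way of illustration only, the nearest-data-point query described above might be sketched as follows in Python. The array layout, the function name, and the use of a k-d tree are assumptions of the sketch and are not mandated by the embodiments:

```python
# Illustrative sketch only: locating the closest local 3D data point to a
# requested viewpoint within a search radius, using a k-d tree.
import numpy as np
from scipy.spatial import cKDTree

def query_local_viewpoint(local_points, requested_viewpoint, search_radius):
    """local_points: (N, 3) array of local 3D data points. Returns the point
    closest to the requested viewpoint, or None if none lies in the radius."""
    tree = cKDTree(local_points)
    distance, index = tree.query(requested_viewpoint)
    if distance > search_radius:
        return None           # viewpoint not yet represented in the local map
    return local_points[index]
```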
Once the local viewpoint 14a corresponding to the viewpoint 14 is determined by the planning system 34, the planning system 34 determines a final target pose 38b of the robotic vehicle 18 which would enable the sensor 22 to capture real-world data from a target viewpoint 14b unimpeded. The target viewpoint 14b corresponds to the local viewpoint 14a on the navigation computing system and the viewpoint 14 on the user terminal computing device 10. The final target pose 38b is determined based on the local viewpoint 14a and the field of view of the sensor 22. The robotic vehicle 18 may then plot a path and navigate (using the propulsion system 26) the path from its current physical location to the target viewpoint 14b in the real-world, possibly around complex terrain. Advantageously, the navigation system can adapt to real time changes (in location and environment) whilst continuously updating the local 3D reconstruction 12a and refining the path based on the available data.
Figure 2 shows a subsea environment 36 comprising an environmental component 37, a target viewpoint 14b, and the robotic vehicle 18. The environmental component 37 may at least in part be represented by the local 3D reconstruction 12a and the remote 3D reconstruction 12. The environmental component 37 comprises the target viewpoint 14b. The target viewpoint 14b may be any viewpoint of interest, such as a component, a sub-component, a section of damage, and/or a section of discolouration, etc. The planning system 34 determines a viewpoint surface on which the local viewpoint 14a is located, the viewpoint surface being defined by the local 3D reconstruction 12a. If the local viewpoint 14a comprises a single local data point, then the viewpoint surface may be determined based on the single local data point and local data points in proximity to it. To determine the viewpoint surface for a single data point, surrounding data points may be considered. For example, the surrounding data points may be determined based on spatial proximity to the local viewpoint 14a (e.g., within a radius around the local viewpoint 14a), and/or may be based on a generated mesh which represents surfaces from 3D data points. For example, the local and remote 3D reconstructions 12a, 12 may comprise a surface estimation function which estimates a plurality of surface plates based on the relationships between neighbouring and proximal 3D data points. Thus, the viewpoint surface may be calculated based on the surface or surface plate on which the local viewpoint 14a is situated, or, if the local viewpoint 14a is a node or on a line between surface plates, the surface plates contacting (and/or within one or more surface plates away from) the local viewpoint 14a may be averaged to determine the viewpoint surface.
If the local viewpoint 14a comprises multiple local data points, then the viewpoint surface may be determined based on the local data points of the local viewpoint 14a.
The viewpoint surface may be calculated based on the average plane of the viewpoint surface area determined by the multiple local data points.
The planning system 34 is configured to calculate the (or an estimate of the) mathematical normal 35 of the viewpoint surface. The determination of the viewpoint surface normal may provide an indication of the optimum viewing angle for the sensor 22. To calculate the mathematical normal 35, at least three data points (a triangle) in an area may be used. If multiple data points are available, an average of all data points extracted from a keyframe containing multiple points may be used to calculate the mathematical normal 35. Alternatively, or in addition as a check, if multiple data points are available, an average of all data points within a predetermined (or determined) radius (e.g., the neighbouring data points) may be used to calculate the mathematical normal 35. The calculation of the mathematical normal 35 may also be based on the location of the sensor 22 with respect to the viewpoint surface (e.g., to select the normal direction facing the sensor).
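By way of illustration only, one way to realise the average-plane normal calculation is a least-squares plane fit over the neighbouring data points. The SVD-based fit and the helper names below are assumptions of the sketch, not a prescribed implementation:

```python
# Illustrative sketch only: estimating the surface normal 35 by fitting an
# average plane (least squares via SVD) to neighbouring 3D data points.
import numpy as np

def estimate_surface_normal(neighbour_points, sensor_position):
    """Fit an average plane to the (M >= 3, 3) neighbouring data points and
    return its unit normal, oriented towards the sensor."""
    centroid = neighbour_points.mean(axis=0)
    # The singular vector of least variance is normal to the best-fit plane.
    _, _, vt = np.linalg.svd(neighbour_points - centroid)
    normal = vt[-1]
    if np.dot(normal, sensor_position - centroid) < 0.0:
        normal = -normal      # flip so the normal points off the surface
    return normal / np.linalg.norm(normal)
```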
A first distance 39 from the surface is determined in the direction of the viewpoint surface normal 35, away from the surface, to define an initial target pose (an initial target position and an initial target orientation) 38a. The initial target pose 38a provides an estimated pose for the robotic vehicle 18 which may not be optimised for the sensor 22 to view the target viewpoint 14b, for example, the body 20 may be positioned such that the pose of the body 20 corresponds to the initial target pose 38a. A final target pose 38b is determined based on the initial target pose 38a and the pose of the sensor 22 with respect to the body 20 such that the sensor 22 can view the local viewpoint 14a when the body 20 is at the final target pose 38b. Therefore, it is determined that if the pose of the body 20 is positioned at the final target pose 38b within the environment 36, then the sensor 22 will be able to measure the target viewpoint 14b. The final target pose 38b is described with reference to Figure 3.
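By way of illustration only, the construction of the initial target pose 38a from the viewpoint, the surface normal 35, and the first distance 39 might be sketched as follows. The choice of the z-axis as the viewing axis and the "up" reference vector are assumptions of the sketch:

```python
# Illustrative sketch only: placing the initial target pose 38a at the first
# distance 39 along the surface normal 35, facing back at the viewpoint.
import numpy as np

def initial_target_pose(viewpoint, normal, first_distance):
    """Return a 4x4 homogeneous pose: positioned along the normal, with the
    (assumed) z viewing axis looking back at the surface."""
    position = viewpoint + first_distance * normal
    z_axis = -normal / np.linalg.norm(normal)          # look at the surface
    up = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(up, z_axis)) > 0.99:                 # avoid a degenerate cross product
        up = np.array([0.0, 1.0, 0.0])
    x_axis = np.cross(up, z_axis)
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(z_axis, x_axis)                  # right-handed frame
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2] = x_axis, y_axis, z_axis
    pose[:3, 3] = position
    return pose
```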
Figure 3 shows a virtual subsea environment 36a comprising the robotic vehicle 18, the environmental component 37, and the target viewpoint 14b. The body 20 of the robotic vehicle 18 is positioned at the final target pose 38b. The sensor 22 has a field of view as shown by dashed lines 42a, 42b, and the target viewpoint 14b is within the field of view of the sensor 22.
The planning system 34 may comprise a forward kinematic generator 21. The forward kinematic generator 21 is beneficial if the pose of the body 20 does not represent the pose of the sensor 22, particularly if the sensor 22 is movably coupled to the body 20. The forward kinematic generator 21 may be configured to calculate a forward kinematic transform to compute the pose of the sensor 22 relative to the pose of the body 20. If the sensor 22 is movably coupled to the body 20 via one or more joints, then the forward kinematic generator 21 may receive as an input the joint position data from one or more joint position sensors 30 (corresponding to one or more joints respectively). The forward kinematic transform may be based on the joint position data (i.e., an output signal from one or more joint position sensors 30) to calculate the pose of the sensor 22 from the pose of the body 20.
The kinematic generator 21 may use the forward kinematic transform to determine the final target pose 38b. The final target pose 38b may be defined as the pose of the body 20 which results in the field of view of the sensor 22 being centred on the target viewpoint 14b, and/or the sensor 22 being positioned at the first distance 39, known as a working distance, from the target viewpoint 14b. The first distance 39 may be predetermined. The final target pose 38b may be determined based on the initial target pose 38a and the forward kinematic transform. The final target pose 38b may be determined by calculating one or more forward kinematic transforms between the local viewpoint 14a and the pose of the body 20. That is, determining the final target pose 38b may comprise applying the forward kinematic transform to the initial target pose 38a. The initial target pose 38a may be calculated based on the location of the target viewpoint 14b, the direction of the normal vector, and the first distance 39. This may be repeated for each data point on the surface of the environmental component 37. Therefore, the working distance from the target viewpoint 14b may be maintained as the robotic vehicle 18 moves throughout the survey.
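By way of illustration only, a minimal sketch of the forward kinematic transform and its use to derive the final target pose 38b is given below, assuming a hypothetical 2-degree-of-freedom pan/tilt chain; the joint model and frame conventions are assumptions, not part of the claimed system:

```python
# Illustrative sketch only: a hypothetical pan/tilt forward kinematic chain
# and the derivation of the final target pose 38b from the initial pose 38a.
import numpy as np

def pan_tilt_forward_kinematics(pan, tilt, mount_offset):
    """Hypothetical 2-DOF chain: returns the 4x4 body-to-sensor homogeneous
    transform from the measured joint angles (radians)."""
    cp, sp = np.cos(pan), np.sin(pan)
    ct, st = np.cos(tilt), np.sin(tilt)
    rot_pan = np.array([[cp, -sp, 0.0], [sp, cp, 0.0], [0.0, 0.0, 1.0]])
    rot_tilt = np.array([[ct, 0.0, st], [0.0, 1.0, 0.0], [-st, 0.0, ct]])
    t = np.eye(4)
    t[:3, :3] = rot_pan @ rot_tilt
    t[:3, 3] = mount_offset          # sensor origin expressed in the body frame
    return t

def final_target_pose(initial_target_pose, body_to_sensor):
    """The initial target pose 38a is where the *sensor* should sit; applying
    the inverse forward kinematic transform yields the corresponding *body*
    pose, i.e. the final target pose 38b."""
    return initial_target_pose @ np.linalg.inv(body_to_sensor)
```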
Returning to Figure 2, a path 40 is calculated between the pose of the body 20 of the robotic vehicle 18 and the final target pose 38b, based on the known current pose of the robotic vehicle 18 (e.g., via the pose sensor 24) and the determined final target pose 38b. The planning system 34 is then configured to generate a command signal to cause the propulsion system 26 to move the body 20 of the robotic vehicle 18 based on the path 40.
The path 40 may be a direct path from the current pose of the robotic vehicle 18 to the final target pose 38b. As shown in Figure 2, the path 40 may be a valid spline or line that represents a smooth continuous path traversing across the surface of the environmental component 37. The path 40 may be validated to ensure it meets certain operational requirements, such as, but not limited to, proximity to a seabed, maintaining the working distance from 3D data points on the environmental component 37, shortest path, most efficient path, and freedom from collisions. Any planning system 34 for generating the path 40 may be used. For example, the planning system 34 may be a graph-based planning system.
As the robotic vehicle 18 transitions along the path 40, a sensor 22 or 28 may be actuated to look in a particular direction to improve the view of the environment, for example, looking in the direction of travel of the robotic vehicle 18 so that the horizon recedes ahead of the vehicle.
If the planning system 34 is unable to reach the destination, or it is more power-efficient to move the sensor 22 due to proximity to the requested viewpoint 14a, 14b, the sensor 22 may be actuated (with respect to the body 20) to place the viewpoint 14a, 14b within the field of view of the sensor 22.
An optional interpolation at time (t) intervals along the path 40 may determine one or more intermediate target poses 41a, 41b, and 41c, where all intermediate target poses 41a, 41b, and 41c, are determined in a similar way to the initial or final target pose.
This ensures that during transit the robotic vehicle 18 moves towards the final target pose 38b whilst meeting other inspection criteria including but not limited to observing the surface of the environmental component 37, observing specific targets, keeping at a desired working distance from the surface of the environmental component 37, and/or reaching the final target pose 38b as fast as possible, etc. The robotic vehicle 18 may disengage and pause the local 3D reconstruction 12a if the end of the environmental component 37 is reached and/or a determination that the entire environmental component 37 has been sufficiently surveyed. Alternatively or in addition, the robotic vehicle 18 may include the ground as part of the survey such that a path is produced which can be evaluated to ensure that each data point along the path has a field of view of the sensor 22 which may then intersect with a portion of the existing 3D reconstruction. Alternatively or in addition, a viewpoint selected by an operator may only be accepted if it is determined that the viewpoint is accessible by the robotic vehicle 18.
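By way of illustration only, the interpolation of intermediate target positions along the path 40 might be sketched as follows; the surface_query helper (returning the nearest surface point and its normal from the local 3D reconstruction) is a hypothetical placeholder:

```python
# Illustrative sketch only: interpolating intermediate target positions (cf.
# 41a-c) along the path 40 at fixed arc-length intervals.
import numpy as np

def intermediate_positions(path_points, surface_query, first_distance, step=1.0):
    """path_points: (N, 3) polyline from the current pose to the final target
    pose. surface_query(point) -> (surface_point, normal) is assumed."""
    positions = []
    segment_lengths = np.linalg.norm(np.diff(path_points, axis=0), axis=1)
    cumulative = np.concatenate([[0.0], np.cumsum(segment_lengths)])
    for s in np.arange(step, cumulative[-1], step):
        i = np.searchsorted(cumulative, s) - 1          # segment containing s
        frac = (s - cumulative[i]) / segment_lengths[i]
        point = (1 - frac) * path_points[i] + frac * path_points[i + 1]
        surface_point, normal = surface_query(point)
        # Each intermediate position keeps the working distance along the
        # normal, mirroring how the initial/final target poses are built.
        positions.append(surface_point + first_distance * normal)
    return positions
```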
The planning system 34 may calculate the path 40 based on a PRM or PRM* algorithm, for example using OMPL, sampling points randomly and thereby creating a graph of free space which may define the path 40. The path 40 may be calculated based on solving a path optimisation distance, the local or external 3D reconstruction, a previously travelled route, and/or one or more landmarks.
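By way of illustration only, a compact PRM-style planner is sketched below: sample collision-free points, connect near neighbours with collision-free edges, and search the resulting roadmap. The is_collision_free helper (checking a point or segment against the local 3D reconstruction 12a) and all parameter values are assumptions of the sketch:

```python
# Illustrative sketch only of a PRM-style planner over sampled free space.
import heapq
import numpy as np
from scipy.spatial import cKDTree

def prm_path(start, goal, bounds, is_collision_free, n_samples=500, k=10):
    """bounds: (lo, hi) arrays for uniform sampling. is_collision_free(a, b)
    is assumed to check the straight segment a-b (a point when a == b)."""
    lo, hi = bounds
    samples = [np.asarray(start, float), np.asarray(goal, float)]
    while len(samples) < n_samples:
        p = np.random.uniform(lo, hi)
        if is_collision_free(p, p):
            samples.append(p)
    samples = np.asarray(samples)
    tree = cKDTree(samples)
    # Connect each sample to its k nearest neighbours with collision-free edges.
    edges = {i: [] for i in range(len(samples))}
    for i, p in enumerate(samples):
        dists, idxs = tree.query(p, k=k + 1)
        for d, j in zip(dists[1:], idxs[1:]):           # skip self (distance 0)
            if is_collision_free(p, samples[j]):
                edges[i].append((int(j), float(d)))
    # Dijkstra over the roadmap: index 0 is the start, index 1 is the goal.
    best, prev, heap = {0: 0.0}, {}, [(0.0, 0)]
    while heap:
        cost, i = heapq.heappop(heap)
        if cost > best.get(i, np.inf):
            continue                                    # stale queue entry
        if i == 1:
            break
        for j, d in edges[i]:
            if cost + d < best.get(j, np.inf):
                best[j], prev[j] = cost + d, i
                heapq.heappush(heap, (cost + d, j))
    if 1 not in prev:
        return None                                     # goal unreachable
    path, node = [], 1
    while node != 0:
        path.append(samples[node])
        node = prev[node]
    path.append(samples[0])
    return path[::-1]
```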
The propulsion system 26 is configured to manoeuvre the robotic vehicle 18 to the desired pose, e.g., the final target pose 38b. The navigation computing system 32 may determine a navigation strategy (e.g., path 40) and generate a command signal to the propulsion system 26. The propulsion system 26 may comprise vehicle actuators, and the command signal commands the vehicle actuators such that the vehicle remains as close to the strategy as possible. The command signal could be a thrust command with 6 degrees of freedom for control in translation in the x, y and z axes and rotations about the roll, pitch and yaw axes. Through iterative updates, the navigation system can adapt to real-time changes (in location and environment) whilst continuously updating the local 3D reconstruction 12a and also refining the path based on the available data (e.g., from sensor 22, additional sensor 28, and/or pose sensor 24). Advantageously, the robotic vehicle 18 may be particularly responsive to a changing subsea environment (such as environmental perturbations like changes in water current direction) to ensure that the robotic vehicle 18 accurately follows the navigation strategy. In particular, the vehicle computing device 33 may generate the command signal, such that the on-board sensors (e.g., sensor 22, additional sensor 28, and/or pose sensor 24) provide real-time feedback for closed-loop control of the propulsion system 26. This provides improved stabilisation, which may provide reliable and consistent subsea traversal, improving both navigation and the quality of the local 3D reconstruction when compared to a highly skilled pilot providing command signals via a joystick over a high-latency link.
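By way of illustration only, a 6-degree-of-freedom command signal might be produced by a simple proportional controller as sketched below; the gains and the roll-pitch-yaw error representation are assumptions, and a production system would likely use a full closed-loop controller with velocity feedback:

```python
# Illustrative sketch only: a proportional controller turning pose error into
# a 6-DOF thrust demand (x, y, z translation; roll, pitch, yaw rotation).
import numpy as np

def thrust_command(current_pose, target_pose, kp_lin=0.8, kp_ang=0.4):
    """current_pose/target_pose: (position (3,), roll-pitch-yaw angles (3,)).
    Returns a 6-vector thrust demand for the propulsion system."""
    pos_err = np.asarray(target_pose[0]) - np.asarray(current_pose[0])
    ang_err = np.asarray(target_pose[1]) - np.asarray(current_pose[1])
    ang_err = (ang_err + np.pi) % (2.0 * np.pi) - np.pi   # wrap to [-pi, pi)
    return np.concatenate([kp_lin * pos_err, kp_ang * ang_err])
```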
Advantageously, by setting specific inspection criteria, the planning system 34 may calculate paths between many viewpoints with a robust and repeatable method. Thus, the robotic vehicle 18 may conduct a survey while ensuring robust and repeatable views of a given environment through the automated control of the robotic vehicle 18 from the user terminal computing device 10. In particular, this can aid a visual SLAM system (e.g., onboard the robotic vehicle 18) in generating the local 3D reconstruction, where the performance of the VSLAM system benefits from repeatable view perspectives, which can reduce ambiguity when tracking features from multiple viewing angles.
Figure 4 shows a virtual subsea environment 36a as determined by the navigation computing system 32 with a representation of the robotic vehicle 18. The local 3D reconstruction 12a is shown as a point cloud of many local 3D data points, of which, the local viewpoint 14a is represented by eight circled local 3D data points. The eight circled local 3D data points of the local viewpoint 14a define a surface area of a surface. The normal 35 is calculated based on the average plane of the surface area. The body 20 of the robotic vehicle 18 is positioned at the final target pose 38b. The sensor 22 has a field of view as shown by dashed lines 42a, 42b, and the local viewpoint 14a is within the field of view of the sensor 22.
Figure 5 shows a method of manoeuvring a subsea robotic vehicle 18 in an environment 36. The robotic vehicle may be the robotic vehicle as described in relation to Figures 1 to 4 or may be any robotic vehicle comprising a body and a sensor for underwater mapping (the sensor having a pose with respect to the body and a field of view, FOV).
The method may be computer implemented and may be distributed across many geographical locations via servers and network connections, or may be local and implemented on a vehicle computing device 33 of the subsea robotic vehicle 18. This process is implemented generally at 60.
At step 62, the computing system receives a viewpoint signal comprising a 3D representation of a viewpoint location 14 within an environment. Optionally, the computing system also receives sensor data representing range measurements from the sensor 22, 28 of the robotic vehicle 18.
At step 64, the computing system executes a local 3D reconstruction algorithm to generate a local 3D reconstruction 12a of the environment, the local 3D reconstruction comprises a plurality of 3D data points. Optionally, the local 3D reconstruction algorithm generates the local 3D reconstruction 12a from the sensor data.
At step 66, the computing system queries the local 3D reconstruction 12a in response to receiving the viewpoint signal to obtain a 3D data point within the local 3D reconstruction 12a. The 3D data point corresponds to the viewpoint location 14.
At step 68, the computing system determines a surface on which the 3D data point is located.
At step 70, the computing system calculates the normal to the surface.
At step 72, the computing system determines a first distance 39 from the surface in the direction of the surface normal to define an initial target pose 38a.
At step 74, the computing system obtains a sensor pose representing the pose of the sensor 22 of the subsea robotic vehicle 18, and a body pose representing the pose of the body 20 of the subsea robotic vehicle 18. The sensor pose may be obtained by receiving the pose of the body 20 and determining the pose of the sensor based on a known relationship between the pose of the body 20 and the pose of the sensor 22 (e.g., by calculating a forward kinematic transform).
At step 76, the computing system determines a final target pose 38b, based on the initial target pose 38a and the sensor pose, such that the 3D data point is within an estimated FOV of the sensor 22 when the pose of the body 20 is at the final target pose 38b. The estimated FOV of the sensor 22 may be determined based on known sensor characteristics (e.g., the focal length of the lens, the sensor size, etc.). Therefore, when the body of the robotic vehicle 18 is positioned at the final pose 38b within the environment, the sensor 22 can view/measure the target viewpoint 14b.
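By way of illustration only, the estimated-FOV check at step 76 might be sketched as follows; the focal length and sensor width values are arbitrary examples, and the viewing-axis convention is an assumption:

```python
# Illustrative sketch only: estimating a camera's FOV from focal length and
# sensor size, then checking whether the viewpoint falls inside it.
import numpy as np

def in_fov(sensor_pose, viewpoint, focal_length_mm=8.0, sensor_width_mm=11.3):
    """sensor_pose: 4x4 homogeneous matrix whose z-axis is the viewing axis.
    The focal length and sensor width defaults are arbitrary example values."""
    half_fov = np.arctan(sensor_width_mm / (2.0 * focal_length_mm))
    to_point = viewpoint - sensor_pose[:3, 3]
    to_point = to_point / np.linalg.norm(to_point)
    viewing_axis = sensor_pose[:3, 2]
    angle = np.arccos(np.clip(np.dot(to_point, viewing_axis), -1.0, 1.0))
    return angle <= half_fov
```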
At step 78, the computing system calculates a path 40 between the pose of the body 20 of the robotic vehicle 18 and the final target pose 38b.
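Step 78 (and the network of connected nodes recited in claim 5) could be realised with any graph search; below is a minimal Dijkstra sketch over a pre-built node network, with illustrative data structures, assuming only obstacle-free connections appear as edges.

```python
import heapq
import numpy as np

def shortest_path(nodes: dict, edges: dict, start, goal):
    """Dijkstra search over a network of connected nodes.

    nodes: node_id -> (3,) position.
    edges: node_id -> iterable of neighbouring node_ids; only obstacle-free
           connections are assumed to be present in the network.
    Returns the list of node ids from start to goal, or None if unreachable.
    """
    dist, prev = {start: 0.0}, {}
    frontier = [(0.0, start)]
    while frontier:
        d, n = heapq.heappop(frontier)
        if n == goal:                       # reconstruct the path
            path = [n]
            while n in prev:
                n = prev[n]
                path.append(n)
            return path[::-1]
        if d > dist.get(n, float("inf")):
            continue                        # stale queue entry
        for m in edges.get(n, ()):
            nd = d + float(np.linalg.norm(np.asarray(nodes[m], dtype=float)
                                          - np.asarray(nodes[n], dtype=float)))
            if nd < dist.get(m, float("inf")):
                dist[m], prev[m] = nd, n
                heapq.heappush(frontier, (nd, m))
    return None
```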
At step 80, the computing system generates a command signal to cause the body 20 of the robotic vehicle 18 to move based on the path 40 to arrive at the final target pose 38b. The command signal may be applied to a propulsion system 26 of the robotic vehicle 18.
Advantageously, the method 60 may guarantee that the sensor's view of a given viewpoint is calculated in the same way each time. This can aid a visual SLAM system, whose performance benefits from repeatable view perspectives, which can reduce ambiguity when tracking features from multiple viewing angles.
Figures 6 and 7 show alternative navigation systems 90, 100 for manoeuvring a subsea robotic vehicle 18 in an environment in accordance with the invention. The navigation systems 90, 100 share all of the features of the navigation system 1, with certain alternatives. The same reference numerals are used to denote the same/corresponding features as in Figure 1 and will not be described in detail again below.
Figure 6 shows a navigation system 90 where the navigation computing system comprises additional computing resources 92 in addition to the vehicle computing device 33. The method 60 may be split between the additional computing resources and the vehicle computing device 33. For example, the local 3D reconstruction 12a and/or the planning system 34 may be implemented on the additional computing resources 92. The additional computing resources 92 may be one or more computing devices, such as computers, servers, processors, etc. The computing resources 92 may comprise a transceiver 94 suitable for communicating with transceivers 29, 31 of the navigation system 90.
Figure 7 shows a navigation system 100 comprising a navigation computing system 32 that is distinct from the robotic vehicle 18 (and the vehicle computing device 33). The command signal may define the final target pose 38b and/or the path 40, and the vehicle computing device 33 may be configured to determine the propulsion system commands suitable for commanding the propulsion system 26 to move the body 20 of the robotic vehicle based on the path 40. The navigation computing system 32 may comprise a transceiver 104 suitable for communicating with transceivers 29, 31 of the navigation system 100.
Figure 7 also shows an example of the navigation computing system 32 in communication with two user terminal computing devices 10a, 10b, instead of the one user terminal computing device 10 as shown in Figures 1 and 6. In any embodiment, the navigation computing system 32 may be in communication with one or more user terminal computing devices.
Advantageously, a control method in which an operator adjusts thrust demands via a joystick, with its requirement for real-time feedback, may be replaced with a control method that only requires a decision on what to view. Therefore, an operator may control the sensor position only, while the robotic vehicle 18 is stabilised by the onboard computer (i.e., the vehicle computing device 33). The operator's control signal can be executed locally or remotely (e.g., onshore) over a high-latency link.
In any embodiment, some or all of the components of the robotic vehicle 18 may be contained within a casing defining a sealed interior space for isolating the components from the subsea or other harsh environment, as described in PCT/EP2019/079847 for example.
In any embodiment, the navigation computing system 32 may receive multiple requested viewpoints selected by one or more operators. The planning system 34 may determine multiple local viewpoints (e.g., 14a). The planning system 34 may order the local viewpoints to give the shortest route, based on the current pose of the body 20 of the robotic vehicle 18 and the location of each local viewpoint. Alternatively, each local viewpoint may have an associated priority which determines the order in which the path 40 navigates to each local viewpoint. For example, the path 40 may be determined such that the first local viewpoint visited by the robotic vehicle 18 is the local viewpoint with the highest priority, and the last local viewpoint visited by the robotic vehicle 18 is the local viewpoint with the lowest priority. The priority of each local viewpoint may be inherited from a priority associated with each operator (i.e., each operator may have an operator privilege). Alternatively, the planning system 34 may order the local viewpoints based on both the location of each local viewpoint and a priority associated with each local viewpoint, as in the sketch below.
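Purely as an illustrative sketch of the last-mentioned ordering strategy (priority first, distance second): the function below, its name, and its inputs are assumptions made for the example, not features recited in the disclosure.

```python
import numpy as np

def order_viewpoints(body_position, priorities, positions):
    """Visit order for local viewpoints: highest priority first; among equal
    priorities, greedily pick the viewpoint nearest the current position.

    priorities: sequence of numeric priorities, one per viewpoint.
    positions:  (N, 3) viewpoint locations.
    Returns viewpoint indices in visiting order.
    """
    positions = np.asarray(positions, dtype=float)
    remaining = list(range(len(priorities)))
    order = []
    here = np.asarray(body_position, dtype=float)
    while remaining:
        top = max(priorities[i] for i in remaining)
        batch = [i for i in remaining if priorities[i] == top]
        nxt = min(batch, key=lambda i: np.linalg.norm(positions[i] - here))
        order.append(nxt)
        remaining.remove(nxt)
        here = positions[nxt]
    return order
```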
In any embodiment, the planning system 34 may receive additional objectives for the robotic vehicle 18 which may be used to update the path 40.
In any embodiment, the planning system 34 may be located alongside the user terminal computing device 10, elsewhere (as in Figure 6), or on the vehicle itself (as in Figure 1). In the latter case, the planning system 34 can have a separate map representing the remote 3D reconstruction 12, optimised for querying the intersection of the environmental component 37 being surveyed and potential vehicle poses.
In any embodiment, the pose of the sensor 22 may be fixed and predetermined with respect to the pose of the body 20.
In any embodiment, the robotic vehicle 18 may determine the pose of the body 20 based on measurements from the sensor 22, 28.
In any embodiment, the pose sensor 24 may be at least one of: a DVL (Doppler Velocity Log) sensor, an INS (Inertial Navigation System), an AHRS (Attitude and Heading Reference System), and/or any pose module which produces the navigational/pose information required for a 3D reconstruction algorithm.
In any embodiment, the navigation computing system 32, vehicle computing device 33, additional computing resources 92, and/or the user terminal computing device 10 can each be one distinct computer, controller, or processing core/unit, such that processing can be executed using threads, or can be distinct computers or processing cores/units on the same computer, such that processing can be executed in parallel, i.e., processed simultaneously. Each can be coupled to other devices such as network interfaces (not shown) and volatile and non-volatile memory (not shown). The memory can store sensor data, images, video, or metadata, as well as the algorithms. Each can be suitable for computationally complex image processing and computer vision algorithms, and can comprise one or more processing cores including (but not requiring) GPU cores and embedded processing for video codecs or AI operations, such as an NVidia Tegra (TM) system on a chip (SoC).
In any embodiment, the additional sensor 28 may be a Multi-Beam Echo Sounder (MBES) sensor, a sonar, a lidar, a camera, or another (non-camera based) range sensor. The optional additional sensor 28 may improve reconstruction accuracy and reduce reconstruction time. The sensor 22 may be a range sensor, and the additional sensor 28 may be a second range sensor distinct from the sensor 22, having a different field of view with respect to the sensor 22.
In any embodiment, the transceivers 29, 31, 94, 104 are optional, and may be used if wireless communication is required. In Figure 1, the transceivers 29, 31 may define a bidirectional data link between the robotic vehicle 18 and the user terminal computing device 10. In Figure 6, the transceivers 29, 94, and/or 94, 31 may define a bidirectional data link between the user terminal computing device 10 and the additional computing resources 92 and/or between the additional computing resources 92 and the robotic vehicle 18, respectively. In Figure 7, the transceivers 29, 104, and 104, 31 may define a bidirectional data link between the user terminal computing device 10 and the navigation computing system 32 and between the navigation computing system 32 and the robotic vehicle 18, respectively. The data link in any of the above examples can be via satellite, internet, telephone system, or the like.
In any embodiment, the body 20 may comprise any combination of: the sensor 22, the pose sensor 24, the propulsion system 26, the additional sensor 28, the transceiver 31, and/or the vehicle computing device 33. Alternatively, the sensor 22, the pose sensor 24, the propulsion system 26, and/or the additional sensor 28 may be distinct and/or separate from the body 20 of the robotic vehicle 18. Alternatively, the functionality of the sensor 22, the pose sensor 24, the optional additional sensor 28 (e.g., a range sensor), and/or the optional joint position sensor 30 may be combined in one or more physical sensors.
In any embodiment, a 3D reconstruction may be called a 3D model.
In any embodiment, the sensor 22 has a known pose (i.e., spatial relationship) with respect to the body 20 and a field of view (FOV); the known pose may be predetermined or may be calculated/determined based on sensor measurements (e.g., from the optional joint position sensor 30). In any embodiment, the local 3D reconstruction 12a may be transmitted to a remote server (such as the user terminal computing device 10) or may be saved to a memory of the navigation computing system 32 for later retrieval.
In any embodiment, the local 3D reconstruction 12a may be used as part of a subsea survey, e.g., to inspect infrastructure or natural habitats for environmental purposes.

Although the invention has been described above with reference to one or more preferred embodiments, it will be appreciated that various changes or modifications can be made without departing from the scope of the invention as defined in the appended claims. The word "comprising" can mean "including" or "consisting of" and therefore does not exclude the presence of elements or steps other than those listed in any claim or the specification as a whole. The terms "local" and "remote" are merely labels to differentiate features. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims (15)

  1. A navigation system for manoeuvring a subsea robotic vehicle in an environment, the navigation system comprising a user terminal computing device and a navigation computing system, wherein the robotic vehicle comprises:
a body;
a sensor for underwater mapping, the sensor having a known pose with respect to the body and a field of view, FOV;
a pose sensor for determining the pose of the body; and
a propulsion system for moving the body,
wherein the user terminal computing device comprises:
a first 3D reconstruction of the environment, the first 3D reconstruction comprising a plurality of first 3D data points; and
a selection device configured to enable selection of one of the first 3D data points for the robotic vehicle to view with the sensor,
wherein the navigation computing system comprises:
a second 3D reconstruction of the environment, the second 3D reconstruction comprising a plurality of second 3D data points corresponding to the plurality of first 3D data points; and
a planning system configured to:
query the second 3D reconstruction in response to receiving a signal representative of the selected one of the first 3D data points, to obtain one of the second 3D data points, wherein the second 3D data point corresponds to the first 3D data point;
determine a surface on which the second 3D data point is located;
calculate the normal to the surface;
determine a first distance from the surface in the direction of the surface normal to define an initial target pose;
determine a final target pose, based on the initial target pose and the known sensor pose with respect to the body, such that the second 3D data point is estimated to be within an FOV of the sensor when the pose of the body is at the final target pose;
calculate a path between the pose of the body of the robotic vehicle and the final target pose; and
generate a command signal to cause the propulsion system to move the body of the robotic vehicle based on the path.
  2. The navigation system of claim 1, wherein the sensor is movably coupled to the body with at least one degree of freedom; and the planning system comprises a forward kinematic generator configured to calculate the pose of the sensor relative to the pose of the body to determine the transformation between the initial target pose and the final target pose, wherein determining the final target pose comprises applying the transformation to the initial target pose.
  3. The navigation system of claim 1 or 2, wherein a subset of the plurality of second 3D data points defines a surface area of the surface, wherein the normal to the surface is calculated based on the average plane of the surface area.
  4. The navigation system of any preceding claim, wherein the final target pose is determined such that the sensor can view the surface area when the body is at the final target pose.
  5. The navigation system of any preceding claim, wherein the path is calculated by defining a network of connected nodes to obtain an obstacle-free path between a node equating to the pose of the body of the robotic vehicle and a node equating to the final target pose.
  6. The navigation system of any preceding claim, wherein querying the second 3D reconstruction comprises searching the second 3D reconstruction to identify the closest approximation of the one of the first 3D data points within a determined search radius, wherein the one of the second 3D data points is the closest approximation of the one of the first 3D data points.
  7. The navigation system of any preceding claim, wherein the second 3D reconstruction is down-sampled and transmitted to the user terminal computing device.
  8. The navigation system of any preceding claim, wherein the plurality of second 3D data points are generated based on sensor data from the sensor.
  9. The navigation system of claim 8, wherein the navigation computing system comprises a vehicle computing device, wherein the robotic vehicle comprises the vehicle computing device.
  10. The navigation system of any of claims 8 to 9, wherein the robotic vehicle further comprises a range sensor, wherein the plurality of second 3D data points are further generated based on range sensor data from the range sensor.
  11. The navigation system of any of claims 8 to 10, wherein the navigation computing system is a vehicle computing device.
  12. The navigation system of any of claims 8 to 11, wherein the command signal defines the final target pose, and the navigation computing system calculates the path.
  13. A method of navigating a subsea robotic vehicle in an environment, the robotic vehicle comprising a body and a sensor for underwater mapping, the sensor having a pose with respect to the body and a field of view, FOV, the method comprising:
receiving a viewpoint signal comprising a 3D representation of a viewpoint location within an environment;
executing a 3D reconstruction algorithm to generate a 3D reconstruction of the environment, the 3D reconstruction comprising a plurality of 3D data points;
querying the 3D reconstruction in response to receiving the viewpoint signal to obtain a 3D data point within the 3D reconstruction, wherein the 3D data point corresponds to the viewpoint location;
determining a surface on which the 3D data point is located;
calculating the normal to the surface;
determining a first distance from the surface in the direction of the surface normal to define an initial target pose;
obtaining a sensor pose representing the pose of the sensor of the subsea robotic vehicle, and a body pose representing the pose of the body of the subsea robotic vehicle;
determining a final target pose, based on the initial target pose and the sensor pose, such that the 3D data point is within an estimated FOV of the sensor;
calculating a path between the pose of the body of the robotic vehicle and the final target pose; and
generating a command signal to cause the body of the robotic vehicle to move based on the path to arrive at the final target pose.
  14. The method of claim 13, further comprising receiving sensor data representing range measurements from the sensor, wherein the 3D reconstruction algorithm updates the 3D reconstruction from the sensor data.
  15. A subsea robotic vehicle implementing the method of claim 13 or 14.
GB2317696.9A 2023-11-20 2023-11-20 Robotic vehicle navigation system and method Pending GB2635677A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB2317696.9A GB2635677A (en) 2023-11-20 2023-11-20 Robotic vehicle navigation system and method
PCT/GB2024/052892 WO2025109305A1 (en) 2023-11-20 2024-11-14 Robotic vehicle navigation system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2317696.9A GB2635677A (en) 2023-11-20 2023-11-20 Robotic vehicle navigation system and method

Publications (1)

Publication Number Publication Date
GB2635677A true GB2635677A (en) 2025-05-28

Family

ID=93650446

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2317696.9A Pending GB2635677A (en) 2023-11-20 2023-11-20 Robotic vehicle navigation system and method

Country Status (2)

Country Link
GB (1) GB2635677A (en)
WO (1) WO2025109305A1 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BR112015025450A2 (en) * 2013-04-05 2017-07-18 Lockheed Corp Underwater platform with lidar and related methods

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
WO2025109305A1 (en) 2025-05-30


Legal Events

Date Code Title Description
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40127024

Country of ref document: HK

732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20251218 AND 20251224