CA2950791C - Binocular visual navigation system and method based on power robot - Google Patents
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0251—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
Abstract
The present invention discloses a binocular visual navigation system based on a power robot. The system comprises: an image acquisition system that controls, by means of acquisition software, a camera to acquire environmental images of the road along which the robot moves ahead, and transmits the acquired images to a visual analysis system via a wire; the visual analysis system, which detects obstacles according to the binocular image information and the intrinsic and extrinsic parameters of the camera and transmits the obstacle information to a robot control system; and a path planning system that builds a two-dimensional occupancy map from the environmental information acquired by the image acquisition system, plans a path, and immediately adjusts the traveling path of the robot when an obstacle appears so as to avoid collision of the robot with the obstacle. The present invention also discloses a visual navigation method, intended to keep a power robot from colliding with obstacles, thereby enhancing the adaptive ability of the robot to its environment, achieving an autonomous navigation function for the power robot in a complex outdoor environment, and improving its flexibility and safety.
Description
BINOCULAR VISUAL NAVIGATION SYSTEM AND METHOD BASED ON POWER ROBOT
Field of the Invention
The present invention relates to a binocular visual navigation system and method, in particular to a binocular visual navigation system and method based on a power robot.
Background of the Invention
With the continuous development of the social economy, electricity consumption grows rapidly, leading to sharp increases in the length of power transmission lines and the number of transformer substations. Security inspection of transformer substations and power transmission lines still relies mostly on manual work.
Manual inspection of power equipment is out of step with the times due to its high labor intensity and low efficiency. To ensure a safe and reliable power supply, automation and modernization of the operation and maintenance of power equipment have become increasingly urgent. With the rapid development of modern technologies, various power robots have emerged that can perform inspection instead of, or in addition to, manual work, improving working efficiency, reducing labor intensity and lowering operation risks. This provides a new approach to unattended operation and automatic operation and maintenance of transformer substations.
Existing power robots acquire equipment images on the basis of a fixed inspection route and fixed inspection positions, with a robot platform that uses a magnetic sensor for navigation. This navigation mode is reliable and stable, but costly and inflexible. Once a navigation route is determined, on-site construction is required to bury a magnetic track, and once the magnetic track is laid it is difficult to change. In addition, the regions where a magnetic track can be buried are limited in a transformer substation, which constrains the traveling range of the robot; as a result, the robot must acquire images far away from the equipment, which causes many problems for subsequent image processing and analysis. Moreover, a transformer substation is an unstructured environment that people enter aperiodically for equipment maintenance and that vehicles may also enter, so a new-generation power robot must be harmless to people and able to avoid vehicles on the road along which it moves. Adding a more flexible environmental perception capability to the robot is therefore important both for improving its inspection performance and for raising its level of intelligence.
For environmental perception, power robots at present use an ultrasonic sensor to detect nearby obstacles. Because ultrasonic measurement is based on a scanning line at a fixed height, it cannot fully meet the demands of a power robot that must cover a certain height range. Developing an environmental perception technology suited to the transformer substation environment and offering a large detection range therefore remains a problem to be solved.
With the development of sensor and processor technologies, robots are becoming increasingly intelligent. The formation and development of computer vision theory give a robot a visual system similar to human eyes, from which it can obtain rich environmental information. Visual navigation technologies mainly include monocular, binocular stereo, and tri-ocular or multi-ocular camera structures. Monocular vision mainly identifies markers in a structured scene using the image information acquired by one camera; navigation with a monocular camera can also use the image-plane information directly to judge the scene, or recover three-dimensional information from motion. Stereo vision mainly uses three-dimensional information reconstructed from two or more images, on which basis obstacle detection and road condition detection, and eventually functions such as obstacle avoidance and navigation, can be realized. For example, the early Mars rovers used binocular stereo vision for visual obstacle avoidance and navigation.
The obstacle avoidance function is an essential function of an intelligent mobile robot.
A robot based on this function can respond in real time to changing environments to avoid the risks of collision, and can automatically avoid obstacles and continue to
travel along an original path; the autonomy and system security of the robot are thereby enhanced. The obstacle avoidance function mainly includes two parts:
obstacle detection and path planning. More specifically, obstacle detection obtains information such as the position and size of an obstacle by processing and analyzing the information acquired by a sensor; path planning builds a map using the currently detected obstacle information and other road information, and plans a path available for traveling. At present, obstacle detection methods based on a visual system can be divided into methods based on three-dimensional information recovery, methods based on inverse projection, and methods based on disparity histograms.
By analyzing binocular image information acquired by a power robot while traveling in the prior art, the following problems are found:
(1) the large amount of equipment on both sides of the traveling path produces a complex image background, so that an obstacle may overlap an equipment region, which makes operations such as target extraction and segmentation extremely difficult;
(2) the road region information is affected by outdoor illumination variation, special weather and the like, producing interference information on the road surface and easily introducing obstacle detection errors.
Summary of the Invention
An objective of the present invention is to solve the above problems and provide a binocular visual navigation system and method based on a power robot, with the advantages of no influence on the normal operation of equipment in a transformer substation, analysis based on image information, an abundant amount of information, low cost and easy popularization.
To achieve the above objective, the present invention involves technical solutions as described below.
A binocular visual navigation system based on a power robot comprises:
an image acquisition system that comprises a binocular camera which is connected to an image acquisition card by means of an image transmission wire and used to acquire environmental images of a road along which the power robot moves ahead and then upload via the image transmission wire the acquired images to the image acquisition
card, which then transmits the acquired environmental images of the road to a visual analysis system;
the visual analysis system that detects obstacles within the road region of a transformer substation by means of the inverse projection theory and the three-dimensional reconstruction technology, according to the binocular image information acquired by the image acquisition system and the information of the intrinsic and extrinsic parameters of the camera, and transmits the obstacle information to a path planning system;
the path planning system that builds a two-dimensional occupancy map according to the environmental information acquired by the image acquisition system, plans a path and immediately adjusts the traveling path of the robot when an obstacle appears to avoid collision of the robot with the obstacle; and
a motion control system that controls the body of the robot to move according to the path planned by the path planning system.
The binocular camera has two optical axes parallel to each other and a connecting line of two optical centers parallel to ground, and is mounted on a body of the power robot by means of a mounting support that is a camera holding platform. An optical axis orientation of the binocular camera is set to be parallel to a Y axis of a coordinate system for the robot. The camera holding platform rotates about a fixed axis.
A visual navigation method for the binocular visual navigation system based on a power robot comprises the following specific steps:
step 1, acquiring binocular environmental images, and obtaining binocular images without distortion and with pixel matching relations constrained to the same X axis via image parsing, distortion rectification and stereo rectification;
step 2, carrying out inverse projection transformation on the rectified images, projecting the left ocular image and the right ocular image onto the ground plane, carrying out pixel subtraction between the re-projected left and right ocular images, carrying out Canny edge detection on the difference image, and then using Hough straight-line detection to find the road region and road edges;
step 3, after determining the road region and the road edges, determining a matching relation between the left ocular image and the right ocular image according to the gray correlation of regions within the regions of interest of the images; then generating a disparity image according to this matching relation, calculating histograms of the disparity image, carrying out histogram segmentation on the disparity image and judging whether an obstacle is present in the
disparity image; if so, going to step 4, otherwise, going back to the step 1;
step 4, determining three-dimensional information of the obstacle according to the obstacle region obtained by segmentation and the camera calibration information, and determining a size and an average distance of the obstacle region according to the three-dimensional information of the obstacle; and
step 5, transmitting the detected obstacle information to the robot control system, updating a map according to the new obstacle information, planning, by the path planning system, a next moving direction of the robot according to existing path information, and inputting, by the robot control system, a speed into a mobile platform driver according to the current traveling direction of the robot to allow the robot to move; if the next operation cannot be executed, stopping the robot and reporting a signal to an upper computer; otherwise, repeating step 1.
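The five steps above can be sketched as one perception-planning-control loop. The patent discloses no source code, so the Python skeleton below is purely illustrative: every component function (acquire, rectify, detect_obstacle, locate, plan, execute) is a hypothetical stand-in supplied by the caller, not an implementation from the patent.

```python
def navigation_cycle(acquire, rectify, detect_obstacle, locate, plan, execute):
    """Run one perception-planning-control cycle; returns the command issued."""
    left, right = acquire()                  # step 1: grab binocular frames
    left, right = rectify(left, right)       # step 1: distortion/stereo rectification
    obstacle = detect_obstacle(left, right)  # steps 2-3: road region, disparity, segmentation
    if obstacle is None:
        return None                          # no obstacle: keep the current path
    size, distance = locate(obstacle)        # step 4: 3D size and average distance
    command = plan(size, distance)           # step 5: update the map and replan
    execute(command)                         # step 5: drive the mobile platform
    return command
```

The loop returns None when no obstacle is present, mirroring the "otherwise, repeating step 1" branch of the method.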
The step 1 comprises the following specific steps:
(1-1) acquiring, by the binocular camera, the environmental image information;
(1-2) carrying out distortion rectification and stereo rectification on the left ocular image and the right ocular image acquired during traveling of the power robot according to the intrinsic parameters Kl, Kr of the binocular camera, the relative position relations R, T of the binocular camera and the distortion parameters (k1, k2, k3, p1, p2) obtained by calibration; and
(1-3) carrying out inverse projection transformation on the rectified left ocular image and right ocular image, re-projecting the binocular images to the ground plane, wherein the inverse projection transformation is determined via the parameters Kl, Kr, R, T of the binocular camera and a rotation matrix Rw and a translation matrix Tw of the coordinate system for a reference camera relative to the world coordinate system for the ground plane;
assuming the intrinsic parameters of the current left or right ocular camera to be

    K = | fu  s   u0 |
        | 0   fv  v0 |
        | 0   0   1  |

with fu and fv being the horizontal focal length and the longitudinal focal length, (u0, v0) representing the main point position in the image plane and s being the pixel aspect ratio; the rotation matrix and translation matrix of the current camera relative to the world coordinate system to be Rw and Tw; the pixel coordinates in the image plane to be (u, v) and the target coordinates on the ground plane under the world coordinate system to be (X, Y, Z); and given that the height of the optical centers of the binocular camera relative to the ground plane is H and the pitch angle of the optical axes relative to the ground plane is θ, defining Pground as the equation of the ground plane under the coordinate system for the robot and, according to the real environment of the transformer substation, taking the ground plane as Z = 0, that is Pground = (0, 0, 1, 0); then obtaining the projection relation between the image plane and the ground plane according to the projection relation of the binocular camera, represented in homogeneous coordinates as:

    Zc · [u, v, 1]^T = K [Rw | Tw] [X, Y, Z, 1]^T

The step 2 comprises the following specific steps:
(2-1) inversely projecting the left ocular image and the right ocular image to the world coordinate system, where the road edge information keeps its parallel relation according to the mapping relation between the planes; and
(2-2) after obtaining the inverse projection matrix of the binocular camera, inversely projecting the left ocular image and the right ocular image to the world coordinate system to obtain images ImgLremap and ImgRremap; then carrying out difference calculation on ImgLremap and ImgRremap to obtain Image_difference, filtering the overlap information of ImgLremap and ImgRremap in the world coordinate system, and extracting the region information where the road edges do not overlap the obstacle region; detecting straight lines by Hough transformation with constraints on the quadrant direction, length and position of each line, and extracting the straight-line equations of the road edges on both sides under the coordinate system for the camera; converting them according to the coordinate transformation of the camera relative to the coordinate system for the robot to obtain the road information under the coordinate system for the robot, and providing this reference road information to the path planning system for path planning.
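The inverse projection of steps (1-3) and (2-1)-(2-2) can be sketched as follows: for the ground plane Z = 0, the pinhole projection reduces to a 3x3 homography H = K·[r1 r2 T] (the first two columns of R plus the translation), so a pixel maps back to ground coordinates through the inverse of H. The Python below is an illustrative sketch, not the patent's implementation, and the calibration values in the demo are assumed, not calibration data from the patent.

```python
def matmul3(a, b):
    # 3x3 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv3(m):
    # inverse of a 3x3 matrix via the adjugate
    adj = [[m[(j + 1) % 3][(i + 1) % 3] * m[(j + 2) % 3][(i + 2) % 3]
            - m[(j + 1) % 3][(i + 2) % 3] * m[(j + 2) % 3][(i + 1) % 3]
            for j in range(3)] for i in range(3)]
    det = sum(m[0][j] * adj[j][0] for j in range(3))
    return [[adj[i][j] / det for j in range(3)] for i in range(3)]

def pixel_to_ground(K, R, T, u, v):
    """Map an image pixel (u, v) to (X, Y) on the ground plane Z = 0."""
    # H maps homogeneous ground coordinates (X, Y, 1) to image coordinates:
    # the third column of [R | T] drops out because Z = 0.
    H = matmul3(K, [[R[i][0], R[i][1], T[i]] for i in range(3)])
    Hi = inv3(H)
    x, y, w = (sum(Hi[i][j] * p for j, p in enumerate((u, v, 1.0)))
               for i in range(3))
    return x / w, y / w

# Illustrative (assumed) calibration: 100 px focal length, camera 5 m above
# the plane with the optical axis perpendicular to it.
K = [[100.0, 0.0, 0.0], [0.0, 100.0, 0.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
T = [0.0, 0.0, 5.0]
X, Y = pixel_to_ground(K, R, T, 20.0, 40.0)  # -> (1.0, 2.0) on the ground plane
```

Applying this mapping to every pixel of the rectified left and right images yields ImgLremap and ImgRremap, whose difference isolates structures that do not lie on the ground plane.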
The step 3 comprises the following specific steps:
(3-1) calculating a pixel matching relation between the binocular images based on the SAD gray correlation between pixels according to the intrinsic and extrinsic parameters of the binocular camera, thereby obtaining the disparity image I_disparity;
(3-2) assuming a matching window of size (w, w) between an image I(x, y) and an image J(x, y), the SAD gray correlation between a point (x, y) in the image I and a point (x', y') in the image J is obtained as follows:

    SAD(x, y) = Σ(i = 0..w) Σ(j = 0..w) | I(x + i, y + j) - J(x' + i, y' + j) |;
(3-3) searching, for a pixel (xl, y) in the left ocular image, each pixel point (xr, y) at the same longitudinal coordinate in the right ocular image, selecting points with higher similarities as candidate matching points by evaluating the SAD similarity between the two pixel points, and then obtaining the final matching relation according to ordering and uniqueness constraints;
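A minimal sketch of the SAD matching in (3-1)-(3-3), assuming a (2w+1) x (2w+1) window and a small search range (both values are illustrative, not taken from the patent); the ordering and uniqueness constraints of the full method are omitted for brevity.

```python
def sad(left, right, xl, xr, y, w):
    # sum of absolute differences over a (2w+1) x (2w+1) window
    return sum(abs(left[y + j][xl + i] - right[y + j][xr + i])
               for j in range(-w, w + 1) for i in range(-w, w + 1))

def disparity_at(left, right, xl, y, w=1, max_disp=8):
    """Disparity d minimising SAD for left pixel (xl, y), with xr = xl - d."""
    best_d, best_score = 0, float("inf")
    for d in range(max_disp + 1):
        xr = xl - d
        if xr - w < 0:
            break  # candidate window would leave the right image
        score = sad(left, right, xl, xr, y, w)
        if score < best_score:
            best_d, best_score = d, score
    return best_d

# Synthetic pair: the right image is the left shifted by a true disparity of 2.
left = [[(7 * x + 3 * y) % 25 for x in range(16)] for y in range(7)]
right = [[left[y][x + 2] if x + 2 < 16 else 0 for x in range(16)]
         for y in range(7)]
d = disparity_at(left, right, 8, 3, w=1, max_disp=5)  # -> 2
```

Because rectification has already constrained matches to the same row, the search is one-dimensional, which is what makes dense SAD matching tractable.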
(3-4) then calculating the histograms of the disparity image, followed by a Gaussian smoothing operation on the histograms, wherein a method of calculating the disparity histograms is to accumulate the number of the same pixels among all pixels in an image to generate a one-dimensional array for recording a probability of occurrence of each gray value in a two-dimensional image;
the smoothing process is as follows:
    P(x) = (1 / (sqrt(2π) σ)) · e^(-(x - μ)² / (2σ²));

wherein P(x) is the value after filtering; x is the value of each unit of the histograms; and μ, σ² are the mean and variance of the Gaussian function; and
(3-5) using a straight line having a given slope to detect the histogram intervals above the straight line according to the smoothed histograms, thus obtaining an initial detection result for an obstacle.
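The disparity-histogram segmentation of (3-4)-(3-5) can be sketched as below. The Gaussian sigma, the kernel radius and the slope/offset of the detection line are illustrative assumptions; in the full method the dominant road-surface peak would already be constrained by the road-region detection of step 2, so here we only assert that the obstacle bin rises above the line.

```python
import math

def histogram(disp, bins):
    # number of pixels at each integer disparity value
    h = [0] * bins
    for row in disp:
        for d in row:
            h[d] += 1
    return h

def gaussian_smooth(h, sigma=1.0, radius=2):
    # convolve with a sampled, normalised Gaussian; edges are replicated
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    norm = sum(kernel)
    return [sum(kernel[i + radius] * h[min(max(x + i, 0), len(h) - 1)]
                for i in range(-radius, radius + 1)) / norm
            for x in range(len(h))]

def obstacle_bins(h, slope, offset):
    # bins whose smoothed count rises above the line t(d) = slope * d + offset
    return [d for d, v in enumerate(h) if v > slope * d + offset]

# Synthetic 10x10 disparity map: road at d = 2, a 12-pixel obstacle at d = 7.
disp = [[2] * 10 for _ in range(10)]
for y in range(3):
    for x in range(4):
        disp[y][x] = 7
h = histogram(disp, 12)
sm = gaussian_smooth(h)
bins = obstacle_bins(sm, 0.0, 3.0)  # the obstacle bin d = 7 is flagged
```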
The step 4 comprises the following specific steps:
(4-1) after obtaining the matching relation of the obstacle region, obtaining the three-dimensional point coordinates of the obstacle region according to the triangulation principle on the basis of the known intrinsic and extrinsic parameters of the binocular camera;
(4-2) constructing a three-dimensional reconstruction equation according to the matching relation between the left and right ocular images and the intrinsic and extrinsic parameters of the binocular camera, and obtaining the three-dimensional point coordinates of the obstacle region:
    [X, Y, Z, W]^T = Q [u, v, d, 1]^T, with

    Q = | 1   0    0       -u0              |
        | 0   1    0       -v0              |
        | 0   0    0        f               |
        | 0   0   -1/Tx   (u0 - u0')/Tx     |

wherein u0 and v0 are the rectified horizontal and vertical coordinates of the main point of the reference camera; u0' is the horizontal coordinate of the main point of the other camera; f is the rectified focal length; Tx is the baseline distance between the two cameras; (u, v, d) are the image coordinate values and the corresponding disparity value; and the three-dimensional point coordinates under a three-dimensional coordinate system with the camera as the origin are (X/W, Y/W, Z/W); and
(4-3) after obtaining the three-dimensional points of the obstacle region in the image, fitting the planes to which the three-dimensional points of connected regions belong according to the distribution of the three-dimensional points of the obstacle, creating a minimum enclosing rectangle and calculating a centroid, thereby eventually determining the actual size and specific position of the obstacle.
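The reprojection of (4-2) can be sketched directly from the Q matrix. The calibration numbers below are assumed, not from the patent; note that with the -1/Tx convention W comes out negative, so the metric depth is the magnitude |Z| = f·Tx/d.

```python
def make_q(f, tx, u0, v0, u0_prime):
    # reprojection matrix as written in (4-2)
    return [[1.0, 0.0, 0.0, -u0],
            [0.0, 1.0, 0.0, -v0],
            [0.0, 0.0, 0.0, f],
            [0.0, 0.0, -1.0 / tx, (u0 - u0_prime) / tx]]

def reproject(q, u, v, d):
    """Return (X/W, Y/W, Z/W) for pixel (u, v) with disparity d."""
    vec = (u, v, d, 1.0)
    x, y, z, w = (sum(q[i][j] * vec[j] for j in range(4)) for i in range(4))
    return x / w, y / w, z / w

# Assumed calibration: f = 500 px, 0.12 m baseline, aligned principal points.
q = make_q(500.0, 0.12, 320.0, 240.0, 320.0)
x, y, z = reproject(q, 420.0, 240.0, 10.0)
# with the -1/Tx convention W < 0, so the metric depth is |z| = f * Tx / d = 6 m
```

Applying this to every matched pixel of the segmented region yields the point cloud from which (4-3) fits planes and the minimum enclosing rectangle.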
The step 5 comprises the following specific steps:
(5-1) mapping the three-dimensional point coordinates of the obstacle to the world coordinate system, building a grid map within a certain range with a midpoint in the connecting line of the optical centers of the binocular camera as an origin, and carrying out grid filling according to the number of projections of the three-dimensional points on the ground plane;
(5-2) carrying out path planning for the power robot using the Markov path planning algorithm according to the local two-dimensional occupancy map, the known global target points and the position of the power robot in the global map, thereby obtaining an obstacle-avoiding path; and
(5-3) planning, by the path planning system, the next moving direction for the robot, and inputting, by the robot control system, a speed into the mobile platform driver according to the current traveling direction of the robot to allow the robot to move.
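The grid building of (5-1) can be sketched as follows: obstacle points are projected onto the ground plane and binned into cells centred on the midpoint of the camera baseline. The cell size, map extent and minimum-hit threshold are illustrative assumptions.

```python
def occupancy_grid(points, cell=0.25, half_extent=5.0, min_hits=3):
    """points: (x, y, z) triples in the world frame; returns occupied cells."""
    n = int(2 * half_extent / cell)
    hits = {}
    for x, y, _z in points:
        i = int((x + half_extent) / cell)  # column index on the ground plane
        j = int((y + half_extent) / cell)  # row index on the ground plane
        if 0 <= i < n and 0 <= j < n:
            hits[(i, j)] = hits.get((i, j), 0) + 1
    # a cell counts as occupied once enough projected points fall into it
    return {c for c, k in hits.items() if k >= min_hits}

# Four obstacle points above one ground location -> one occupied cell.
occupied = occupancy_grid([(1.0, 2.0, 0.1 * k) for k in range(1, 5)])  # {(24, 28)}
```

Requiring several hits per cell gives cheap robustness against isolated stereo-matching outliers before the map is handed to the planner.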
The present invention has the following beneficial effects.
(1) The present invention is based on a binocular visual system; by using the inverse projection algorithm and extracting the road edge information, it can provide the deviation of the robot's heading angle to adjust the traveling path without recovering three-dimensional information;
(2) The present invention also provides the obstacle detection based on the disparity histogram segmentation, thereby realizing automatic detection on an obstacle appearing in the road region and having a certain distance above the ground and providing sufficient information for autonomous obstacle avoidance and navigation of the robot;
(3) Based on the present invention, the robot can automatically detect any obstacle in the traveling direction, and automatically make responses such as stopping, avoiding and alarming according to the size of the position of the obstacle to avoid collision with the obstacle; thus, the adaptive ability of the robot to the environment is enhanced with actual achievement of the autonomous navigation function of the power robot in an outdoor complex environment and improvement of the flexibility and safety thereof;
(4) The present invention utilizes a non-contact environmental information perception technology without affecting normal operation of the equipment in a transformer substation;
and
step 4, determining three-dimensional information of the obstacle according to an obstacle region obtained by segmentation and camera calibration information, and determining a size and an average distance of the obstacle region according to the three-dimensional information of the obstacle; and step 5, transmitting the detected obstacle information to the robot control system, updating a map according to the new obstacle information, planning, by the path planning system, a next moving direction of the robot according to existing path information, and inputting, by the robot control system, a speed into a mobile platform driver according to a current traveling direction of the robot to allow the robot to move; if a next step of operation cannot be executed, stopping the robot and reporting a signal to an upper computer; otherwise, repeating the step 1.
The step 1 comprises the following specific steps:
(1-1) acquiring, by the binocular camera, the environmental image information;
(1-2) carrying out distortion rectification and stereo rectification on the left ocular image and the right ocular image acquired during traveling of the power robot according to intrinsic parameters K_l, K_r of the binocular camera, relative position relations R, T of the binocular camera and distortion parameters (k_1, k_2, k_3, p_1, p_2) obtained by calibration; and (1-3) carrying out inverse projection transformation on the rectified left ocular image and right ocular image, re-projecting the binocular images to the ground plane, wherein the inverse projection transformation is determined via the parameters K_l, K_r, R, T of the binocular camera and a rotation matrix R_w and a translation matrix T_w of a coordinate system for a reference camera relative to a world coordinate system for the ground plane;
assuming the intrinsic parameters of a current left or right ocular camera to be

K = \begin{bmatrix} f_u & s & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix},

with f_u and f_v being a horizontal focal length and a longitudinal focal length, (u_0, v_0) representing a main point position in an image plane and s being a pixel aspect ratio; a rotation matrix and a translation matrix of the current camera relative to the world coordinate system to be R_w and T_w; space pixel coordinates in the image plane to be (u, v) and target coordinates on the ground plane under the world coordinate system to be (X, Y, Z); and given that a height of the optical centers of the binocular camera relative to the ground plane is H and a pitching included angle for the optical centers of the binocular camera relative to the ground plane is θ, defining the coordinate system for the power robot as O_2 and P_ground as an equation of the ground plane under the coordinate system for the robot, and defining the equation of the ground plane according to the real environment of the transformer substation as Z = 0, then reaching

P_ground = \begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix}^T;

obtaining a projection relation between the image plane and the ground plane according to a projection relation of the binocular camera, as represented by homogeneous coordinates as follows:
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,[R_w \mid T_w] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \quad Z = 0.
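With the ground plane fixed at Z = 0, the projection relation above collapses to a 3×3 homography H = K[r_1 r_2 t], and inverting H re-projects image pixels onto the ground plane. The following sketch is not taken from the patent; the camera values are illustrative assumptions:

```python
def matmul3(a, b):
    # 3x3 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv3(m):
    # Inverse of a 3x3 matrix via the adjugate (cofactor transpose).
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    cof = [[m[(i + 1) % 3][(j + 1) % 3] * m[(i + 2) % 3][(j + 2) % 3]
          - m[(i + 1) % 3][(j + 2) % 3] * m[(i + 2) % 3][(j + 1) % 3]
            for j in range(3)] for i in range(3)]
    return [[cof[j][i] / det for j in range(3)] for i in range(3)]

def ground_homography(K, R, t):
    # H = K [r1 r2 t]: columns r1, r2 of R plus the translation t; the
    # third column of R drops out because every ground point has Z = 0.
    G = [[R[i][0], R[i][1], t[i]] for i in range(3)]
    return matmul3(K, G)

def image_to_ground(H_inv, u, v):
    # Map pixel (u, v) through the inverse homography and dehomogenize.
    x, y, w = (sum(H_inv[i][j] * p for j, p in enumerate((u, v, 1.0)))
               for i in range(3))
    return x / w, y / w

# Illustrative numbers: a camera looking straight down from 2 m (R = I).
K = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 2.0]
H_inv = inv3(ground_homography(K, R, t))
```

With these assumed values the principal point (320, 240) maps to the ground origin, and a pixel 100 columns to its right maps to X = 0.4 m.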
The step 2 comprises the following specific steps:
(2-1) inversely projecting the left ocular image and the right ocular image to the world coordinate system, wherein the road edge information remains in a parallel relation according to the mapping relation between the planes; and (2-2) after obtaining an inverse projection matrix of the binocular camera, inversely projecting the left ocular image and the right ocular image to the world coordinate system to obtain images ImgL_remap and ImgR_remap; then carrying out difference calculation on the images ImgL_remap and ImgR_remap to obtain Image_difference, filtering overlap information of the images ImgL_remap and ImgR_remap in the world coordinate system, and extracting region information with the road edges not overlapping the obstacle region; using Hough transformation straight-line detection to set up constraints according to a quadrant direction, a length and a position of each straight line, and extracting a straight line equation of the road edges on both sides under the coordinate system for the camera; carrying out calculation according to a coordinate transformation relation of the camera relative to the coordinate system for the robot to obtain road information under the coordinate system for the robot, and providing the reference road information to the path planning system for path planning.
The step 3 comprises the following specific steps:
(3-1) calculating a pixel matching relation between the binocular images based on an SAD gray correlation between pixels according to the intrinsic and extrinsic parameters between the coordinates of the binocular camera, thereby obtaining the disparity image I_disparity;
(3-2) assuming the window size of the SAD gray correlation between an image I(x,y) and an image J(x,y) to be (w,w), then obtaining the SAD correlation between a point (x, y) in the image I(x,y) and a point (x', y') in the image J(x,y) as follows:

SAD(x,y) = \sum_{i=-w}^{w} \sum_{j=-w}^{w} \left| I(x+i,\, y+j) - J(x'+i,\, y'+j) \right|;
searching for each pixel point (x_r, y) at the same longitudinal coordinate in the right ocular image according to a pixel (x_l, y) in the space of the left ocular image, selecting points having higher similarities as candidate matching points by determining an SAD similarity between every two pixel points, and then obtaining the final matching relation according to sequential and unique constraints;
(3-3) then calculating the histograms of the disparity image, followed by a Gaussian smoothing operation on the histograms, wherein a method of calculating the disparity histograms is to accumulate the number of the same pixels among all pixels in an image to generate a one-dimensional array for recording a probability of occurrence of each gray value in a two-dimensional image;
the smoothing process is as follows:
P(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}};

wherein P(x) is a value after filtering; x is a value of each unit of the histograms; and (\mu, \sigma) are the mean and standard deviation of the Gaussian function; and (3-5) using a straight line having a given slope to detect a histogram interval above the straight line according to the smoothed histograms, thus obtaining an initial detection result for an obstacle.
The step 4 comprises the following specific steps:
(4-1) after obtaining the matching relation of the obstacle region, obtaining three-dimensional point coordinates of the obstacle region according to the triangle location principle on the basis of the known intrinsic and extrinsic parameters of the binocular camera;
(4-2) constructing a three-dimensional reconstruction equation according to the matching relation between the left and right ocular images and the intrinsic and extrinsic parameters of the binocular camera, and obtaining the three-dimensional point coordinates of the obstacle region:
Q = \begin{bmatrix} 1 & 0 & 0 & -u_0 \\ 0 & 1 & 0 & -v_0 \\ 0 & 0 & 0 & f \\ 0 & 0 & -\frac{1}{T_x} & \frac{u_0 - u_0'}{T_x} \end{bmatrix}, \qquad \begin{bmatrix} X \\ Y \\ Z \\ W \end{bmatrix} = Q \begin{bmatrix} u \\ v \\ d \\ 1 \end{bmatrix};

wherein u_0 and v_0 are rectified horizontal and vertical coordinates of the main point of the reference camera; u_0' is a horizontal coordinate of the main point of the other camera; T_x is the baseline distance between the two cameras; f is the rectified focal length; (X/W, Y/W, Z/W) are the three-dimensional point coordinates under a three-dimensional coordinate system with a camera as an origin; and (u, v, d) are image coordinate values and a corresponding disparity value; and (4-3) after obtaining the three-dimensional points of the obstacle region in the image, fitting planes to which three-dimensional points of connected regions belong according to the distribution of the three-dimensional points of the obstacle, creating a minimum enclosing rectangle and calculating a centroid, thereby eventually determining an actual size and a specific position of the obstacle.
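The reprojection through Q can be sketched directly; this follows the OpenCV-style convention, where T_x is taken as the signed horizontal translation of the second camera (negative of the baseline when the left camera is the reference). The numbers below are illustrative assumptions, not calibration values from the patent:

```python
def make_Q(f, u0, v0, u0p, Tx):
    # Reprojection matrix in the form given above; f is the rectified focal
    # length, (u0, v0) the reference principal point, u0p the other camera's
    # principal-point column, Tx the signed horizontal translation.
    return [[1.0, 0.0, 0.0, -u0],
            [0.0, 1.0, 0.0, -v0],
            [0.0, 0.0, 0.0, f],
            [0.0, 0.0, -1.0 / Tx, (u0 - u0p) / Tx]]

def reproject(Q, u, v, d):
    # [X Y Z W]^T = Q [u v d 1]^T, then dehomogenize by W.
    vec = (u, v, d, 1.0)
    X, Y, Z, W = (sum(Q[i][j] * vec[j] for j in range(4)) for i in range(4))
    return X / W, Y / W, Z / W

# Illustrative: f = 500 px, baseline 0.12 m (so Tx = -0.12 for a left
# reference camera), aligned principal points.
Q = make_Q(500.0, 320.0, 240.0, 320.0, -0.12)
x, y, z = reproject(Q, 370.0, 240.0, 25.0)  # depth z = f*B/d = 2.4 m
```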
The step 5 comprises the following specific steps:
(5-1) mapping the three-dimensional point coordinates of the obstacle to the world coordinate system, building a grid map within a certain range with a midpoint in the connecting line of the optical centers of the binocular camera as an origin, and carrying out grid filling according to the number of projections of the three-dimensional points on the ground plane;
(5-2) carrying out path planning for the power robot using the Markov path planning algorithm according to the local two-dimensional occupancy map and known global target points as well as the position of the power robot in a global map, thereby obtaining an obstacle-avoided path; and (5-3) planning, by the path planning system, a next moving direction for the robot, and inputting, by the robot control system, a speed into the mobile platform driver according to the current traveling direction of the robot to allow the robot to move.
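The grid filling of step (5-1) can be sketched as follows; the cell size, extent and hit threshold are illustrative assumptions, with the robot-centered origin as described above:

```python
def build_occupancy_grid(points, cell=0.1, half_extent=2.0, min_hits=2):
    # Grid centered at the origin between the optical centers; each 3-D
    # obstacle point is projected onto the ground plane and the hit count
    # of the corresponding cell is incremented. Cells with at least
    # min_hits projections are marked occupied.
    n = int(2 * half_extent / cell)
    grid = [[0] * n for _ in range(n)]
    for x, y, _z in points:
        i = int((x + half_extent) / cell)
        j = int((y + half_extent) / cell)
        if 0 <= i < n and 0 <= j < n:
            grid[j][i] += 1
    return [[1 if c >= min_hits else 0 for c in row] for row in grid]

# Three points cluster in one cell (occupied); a single stray point does not
# reach the threshold and its cell stays free.
pts = [(0.55, 1.05, 0.3)] * 3 + [(-1.05, -1.05, 0.2)]
grid = build_occupancy_grid(pts)
```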
The present invention has the following beneficial effects.
(1) The present invention is based on the binocular visual system, and may provide a deviation of a heading angle of the robot to adjust the traveling path of the robot without recovery of three-dimensional information and through the use of the inverse projection algorithm and the extraction of the road edge information;
(2) The present invention also provides the obstacle detection based on the disparity histogram segmentation, thereby realizing automatic detection on an obstacle appearing in the road region and having a certain distance above the ground and providing sufficient information for autonomous obstacle avoidance and navigation of the robot;
(3) Based on the present invention, the robot can automatically detect any obstacle in the traveling direction, and automatically respond by stopping, avoiding or alarming according to the size and position of the obstacle to avoid collision with the obstacle; thus, the adaptive ability of the robot to the environment is enhanced, the autonomous navigation function of the power robot in an outdoor complex environment is actually achieved, and the flexibility and safety thereof are improved;
(4) The present invention utilizes a non-contact environmental information perception technology without affecting normal operation of the equipment in a transformer substation;
and
(5) The analysis is made on the basis of image information, which provides an abundant amount of information. Compared with navigation modes such as laser and magnetic trajectory navigation, such a navigation mode is low in cost and easy to popularize.
According to one aspect of the present invention, there is provided a binocular visual navigation system based on a power robot, comprising an image acquisition system that comprises a binocular camera which is connected to an image acquisition card by means of an image transmission wire and used to acquire environmental images of a road along which the power robot moves ahead and then upload via the image transmission wire the acquired images to the image acquisition card which then transmits the acquired environmental images of the road to a visual analysis system; the visual analysis system that achieves detection on obstacles within a road region of a transformer substation by means of the inverse projection theory and the three-dimensional reconstruction technology according to binocular image information acquired by the image acquisition system and information of intrinsic and extrinsic parameters of the camera, and transmits the information to a path planning system, wherein image parsing, distortion rectification and stereo rectification are carried out on the binocular image information to remove distortion and constrain pixel matching relations to a same X axis; the path planning system that builds a two-dimensional occupancy map according to the environmental information acquired from the image, plans a path and immediately adjusts a traveling path of the robot when an obstacle appears to avoid collision of the robot with the obstacle; and a motion control system that controls the robot to move according to the path planned by the path planning system.
Brief Description of the Drawings Fig. 1 is a block diagram of a system of the present invention; and Fig. 2 is a flowchart of a system of the present invention.
1. Image acquisition system, 2. Visual analysis system, 3. Path planning system, 4. Motion control system, and 5. Body of robot.
Description of the Embodiments The present invention will be further illustrated below by combining the accompanying drawings with embodiments.
As shown in Fig. 1, an image acquisition system 1 is provided that comprises a binocular camera which is connected to an image acquisition card by means of an image transmission wire and used to acquire environmental images of a road along which the power robot moves ahead and then upload via the image transmission wire the acquired images to the image acquisition card which then transmits the acquired environmental images of the road to a visual analysis system 2.
The visual analysis system 2 achieves detection on obstacles within a road region of a transformer substation by means of the inverse projection theory and the three-dimensional reconstruction technology according to binocular image information acquired by the image acquisition system 1 and information of intrinsic and extrinsic parameters of the camera, and transmits the information to a path planning system 3.
The path planning system 3 builds a two-dimensional occupancy map according to the environmental information acquired by the image acquisition system 1, plans a path and immediately adjusts the traveling path of the robot when an obstacle appears to avoid collision of the robot with the obstacle.
A motion control system 4 controls the body of the robot 5 to move according to the path planned by the path planning system.
The binocular camera has two optical axes parallel to each other and a connecting line of two optical centers parallel to the ground, and is mounted on the body of the power robot by means of a mounting support that is a camera holding platform. An optical axis orientation of the binocular camera is set to be parallel to the Y axis of a coordinate system for the robot. The camera holding platform rotates about a fixed axis.
The traveling trajectory of a transformer substation inspection robot on an equipment space road is determined by means of path planning, and then the robot starts to travel.
While traveling, an onboard processor of the robot issues an instruction to turn on the binocular camera.
The binocular camera has two optical axes parallel to each other and the connecting line of two optical centers parallel to the ground, and is mounted on the body of the power robot by means of the mounting support that is the camera holding platform.
The optical axis orientation of the binocular camera is set to be parallel to the Y axis of the coordinate system for the robot. The camera holding platform rotates about a fixed axis. Thus, the pitching angle of the optical axes of the camera relative to the ground is changed. The pitching angle and mounting height of the camera are determined according to parameters such as the focal length and the field angle range of the camera, the shortest shooting distance of the robot and the safe distance of the robot.
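The trade-off described above can be made concrete: for a camera at height H pitched θ below the horizontal with vertical field angle φ, the visible ground strip runs from H/tan(θ + φ/2) to H/tan(θ − φ/2). A small sketch with illustrative values (not the robot's actual mounting parameters):

```python
import math

def ground_coverage(height, pitch_deg, vfov_deg):
    # Nearest and farthest visible ground distances for a downward-pitched
    # camera; the far distance is unbounded once the upper ray reaches the
    # horizon (pitch <= half the vertical field angle).
    half = math.radians(vfov_deg) / 2.0
    pitch = math.radians(pitch_deg)
    near = height / math.tan(pitch + half)
    far = height / math.tan(pitch - half) if pitch > half else math.inf
    return near, far

# Assumed: 1 m mounting height, 45-degree pitch, 30-degree vertical FOV.
near, far = ground_coverage(1.0, 45.0, 30.0)
```

With these assumed values the camera sees a ground strip from roughly 0.58 m to 1.73 m; reducing the pitch pushes both bounds outward, which is how the shortest shooting distance and safe distance constrain the mounting.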
As shown in Fig. 2, (1) the binocular camera starts to acquire environmental image = information.
(2) Stereo rectification Distortion rectification and stereo rectification are carried out on binocular images acquired by the transformer substation robot while traveling according to the known intrinsic parameters of the binocular camera, by use of a calculation method that may refer to the lens distortion section on page 410 and the stereo rectification section on page 467 of Learning OpenCV.
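The distortion model behind that rectification step is, in the usual Brown-Conrady form also used by OpenCV, a radial term in (k1, k2, k3) plus a tangential term in (p1, p2); rectification numerically inverts this mapping. A pure-Python sketch (the coefficients below are made-up illustrative values):

```python
def distort(x, y, k1, k2, k3, p1, p2):
    # Forward radial-tangential model on normalized image coordinates.
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

def undistort(xd, yd, k1, k2, k3, p1, p2, iters=20):
    # Simple fixed-point inversion: repeatedly correct by the residual
    # between the distorted estimate and the observed distorted point.
    x, y = xd, yd
    for _ in range(iters):
        ex, ey = distort(x, y, k1, k2, k3, p1, p2)
        x += xd - ex
        y += yd - ey
    return x, y
```

For small distortion coefficients the fixed-point iteration contracts quickly, so a round trip through distort and undistort recovers the original point to machine precision.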
(3) Road detection based on inverse projection Inverse projection transformation is carried out on the rectified left ocular image and right ocular image, and the right ocular image and the left ocular image are projected to the world coordinate system for the ground plane, wherein the inverse projection transformation is achieved by calculation via the intrinsic parameters K of the camera and the extrinsic parameters R, T of the camera relative to a coordinate system for the ground plane.
Assuming the intrinsic parameters of the current monocular (left or right ocular) camera to be

K = \begin{bmatrix} f_u & s & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix},

a rotation matrix and a translation matrix of the camera relative to the world coordinate system to be R_w and T_w, space pixel coordinates in the image plane to be (u, v) and target coordinates on the ground plane under the world coordinate system to be (X, Y, Z), and given that the height of the optical centers of the camera relative to the ground plane is H and the pitching included angle for the optical centers of the camera relative to the ground plane is θ, the world coordinate system for the ground plane is defined as O_2 and P_ground is defined as an equation of the ground plane under the coordinate system for the power robot; next, the equation of the ground plane according to the real environment of the transformer substation is defined as Z = 0, thereby reaching

P_ground = \begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix}^T.

Homogeneous coordinates of a projection relation between the image plane and the ground plane obtained by projection transformation of the camera are expressed as follows:
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,[R_w \mid T_w] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \quad Z = 0.

The left ocular image is inversely projected to the world coordinate system for the ground plane to obtain the road edge information still kept in a parallel relation according to affine invariance. After the inverse projection matrix of the binocular camera is obtained, the left ocular image and the right ocular image are inversely projected to the world coordinate system on the ground plane to obtain images ImgL_remap and ImgR_remap. Difference calculation is then carried out on the images ImgL_remap and ImgR_remap to obtain Image_difference, followed by filtering of overlap information of the images in the world coordinate system for the ground plane and extracting of region information with the road edges not overlapping the obstacle region. Hough transformation straight-line detection is used for setting up constraints according to a quadrant direction, a length and a position of each straight line, and a straight line equation of the road edges on both sides under the coordinate system for the camera is extracted. Calculation is performed according to a coordinate transformation relation of the camera relative to the coordinate system for the robot to obtain road information under the coordinate system for the robot, and the reference road information is provided to the path planning system for path planning.
The Hough transformation straight-line detection is performed by having each edge point vote for all possible straight lines passing through it and then finding the point corresponding to the highest accumulated value in the parameter space of the straight line equation as the most probable matching straight line. Let the straight line equation be y = kx + d. After a plurality of possible straight lines are calculated, the two best-fitting straight line equations are obtained according to information such as the slope k, the orientation and the length of each straight line.
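The voting scheme just described can be sketched with a slope-intercept accumulator. This is a simplification: practical implementations usually vote in the (ρ, θ) parameterization to handle vertical lines, and the candidate slopes and bin size here are illustrative assumptions:

```python
from collections import Counter

def hough_vote(points, slopes, d_step=1.0):
    # Each edge point votes, for every candidate slope k, for the intercept
    # d = y - k*x; the accumulator cell with the most votes corresponds to
    # the most probable line y = k*x + d.
    acc = Counter()
    for x, y in points:
        for k in slopes:
            acc[(k, round((y - k * x) / d_step))] += 1
    (k, cell), votes = acc.most_common(1)[0]
    return k, cell * d_step, votes

# Four collinear edge points on y = 2x + 1 plus one outlier.
edges = [(0, 1), (1, 3), (2, 5), (3, 7), (5, 0)]
k, d, votes = hough_vote(edges, slopes=[-1, 0, 1, 2, 3])
```

The four collinear points all land in the same accumulator cell, so the recovered line is y = 2x + 1 with 4 votes; the outlier never accumulates more than one vote on any candidate slope.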
(4) Obstacle detection hypothesis A pixel matching relation between the binocular images is calculated based on an SAD gray correlation between pixels according to the intrinsic and extrinsic parameters between the coordinates of the binocular camera, thereby obtaining the disparity image I_disparity.
Assuming the window size of the SAD gray correlation between an image I(x,y) and an image J(x,y) to be (w,w), the SAD correlation between a point (x, y) in the image I(x,y) and a point (x', y') in the image J(x,y) is then obtained as follows:

SAD(x,y) = \sum_{i=-w}^{w} \sum_{j=-w}^{w} \left| I(x+i,\, y+j) - J(x'+i,\, y'+j) \right|.
Each pixel point (x_r, y) at the same longitudinal coordinate is searched for in the right ocular image according to a pixel (x_l, y) in the space of the left ocular image, and points having higher similarities are selected as candidate matching points by determining an SAD similarity between every two pixel points; then, the final matching relation is obtained according to sequential and unique constraints.
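The matching step above can be sketched as a brute-force SAD search along the rectified row. The window size and synthetic images are illustrative; real code would add the sequential and uniqueness checks described here:

```python
def sad(I, J, x, y, xr, w):
    # SAD window cost between left pixel (x, y) and right candidate (xr, y).
    return sum(abs(I[y + j][x + i] - J[y + j][xr + i])
               for j in range(-w, w + 1) for i in range(-w, w + 1))

def best_disparity(I, J, x, y, w, max_d):
    # Scan candidates on the same row; lowest SAD wins, with d = x_l - x_r.
    costs = [(sad(I, J, x, y, x - d, w), d)
             for d in range(max_d + 1) if x - d - w >= 0]
    return min(costs)[1]

# Synthetic 9x16 pair: a 3x3 patch at column 8 in the left image appears
# at column 5 in the right image, i.e. a true disparity of 3.
left = [[0] * 16 for _ in range(9)]
right = [[0] * 16 for _ in range(9)]
patch = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
for r in range(3):
    for c in range(3):
        left[3 + r][8 + c] = patch[r][c]
        right[3 + r][5 + c] = patch[r][c]
```

At the patch center the d = 3 candidate window matches exactly (zero cost), so the search recovers the planted shift.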
Subsequently, the histograms of the disparity image are calculated, followed by a Gaussian smoothing operation on the histograms. A method of calculating the disparity histograms is to accumulate the number of the same pixels among all pixels in an image to generate a one-dimensional array for recording a probability of occurrence of each gray value in a two-dimensional image.
The smoothing process is as follows:
P(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}};

wherein P(x) is a value after filtering; x is a value of each unit of the histograms; and (\mu, \sigma) are the mean and standard deviation of the Gaussian function.
A straight line having a given slope k is used to detect a histogram interval above the straight line according to the smoothed histograms, thus obtaining an initial detection result for an obstacle. For example, the inclination angle of the straight line may be set to 45 degrees.
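The pipeline described here — disparity histogram, Gaussian smoothing, and the slope-line test — can be sketched together. The kernel radius, slope and toy disparity map are illustrative assumptions:

```python
import math

def disparity_histogram(disp, max_d):
    # One bin per disparity value, accumulated over all pixels.
    hist = [0] * (max_d + 1)
    for row in disp:
        for d in row:
            hist[d] += 1
    return hist

def gaussian_smooth(hist, sigma=1.0, radius=2):
    # Normalized discrete Gaussian kernel, edges clamped.
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    total = sum(kernel)
    n = len(hist)
    return [sum(kernel[j + radius] / total * hist[min(max(i + j, 0), n - 1)]
                for j in range(-radius, radius + 1)) for i in range(n)]

def above_line(hist, slope):
    # Keep disparity bins whose smoothed count rises above y = slope * d.
    return [d for d, h in enumerate(hist) if d > 0 and h > slope * d]

# Toy disparity map: background at d = 1, a small obstacle cluster at d = 6.
disp = [[1, 1, 1, 1, 1],
        [1, 1, 6, 6, 1],
        [1, 1, 6, 1, 1],
        [1, 1, 1, 1, 1]]
hist = disparity_histogram(disp, 7)
peaks = above_line(gaussian_smooth(hist), slope=0.15)
```

The obstacle cluster at d = 6 survives the slope-line test; the surviving bins would then be grouped by connected-region analysis as described below.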
A potential obstacle is detected by means of the histogram segmentation algorithm based on the disparity image in the road region. As the disparity histograms cannot reflect the spatial relation between pixels, the obstacle information identified from the wave crests may not be connected in the image space. Therefore, connected regions need to be extracted by means of connected-region analysis; moreover, whether two regions belong to the same obstacle is judged according to the distances between the regions, and the obstacle is marked in the image space.
(5) Three-dimensional information recovery Given the matching relation of the left ocular image and the right ocular image and the intrinsic and extrinsic parameters of the binocular camera, a three-dimensional reconstruction matrix Q is constructed and direct calculation is performed to obtain the three-dimensional point coordinates:

Q = \begin{bmatrix} 1 & 0 & 0 & -u_0 \\ 0 & 1 & 0 & -v_0 \\ 0 & 0 & 0 & f \\ 0 & 0 & -\frac{1}{T_x} & \frac{u_0 - u_0'}{T_x} \end{bmatrix}, \qquad \begin{bmatrix} X \\ Y \\ Z \\ W \end{bmatrix} = Q \begin{bmatrix} u \\ v \\ d \\ 1 \end{bmatrix};

wherein u_0 and v_0 are rectified horizontal and vertical coordinates of the main point of the reference camera; u_0' is a horizontal coordinate of the main point of the other camera; T_x is the baseline distance between the two cameras; f is the rectified focal length; (X/W, Y/W, Z/W) are the three-dimensional point coordinates under a three-dimensional coordinate system with a camera as an origin; and (u, v, d) are image coordinate values and a corresponding disparity value.
After the three-dimensional points of the obstacle region in the image are obtained, planes to which three-dimensional points of connected regions belong are fitted according to the distribution of the three-dimensional points; a minimum enclosing rectangle is created and a centroid is calculated, thereby eventually determining an actual size and a specific position of the obstacle.
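As a simplified sketch of that final step, the following uses an axis-aligned bounding rectangle on the ground projection in place of a true minimum enclosing rectangle:

```python
def obstacle_extent(points):
    # Projects obstacle points onto the ground plane and returns the centroid
    # plus an axis-aligned bounding rectangle (width, depth) -- a simplified
    # stand-in for the minimum enclosing rectangle described above.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    n = float(len(points))
    centroid = (sum(xs) / n, sum(ys) / n, sum(p[2] for p in points) / n)
    size = (max(xs) - min(xs), max(ys) - min(ys))
    return centroid, size

# Four corner points of a 2 m x 4 m obstacle footprint.
corners = [(0.0, 0.0, 1.0), (2.0, 0.0, 1.0), (2.0, 4.0, 3.0), (0.0, 4.0, 3.0)]
centroid, size = obstacle_extent(corners)
```

A production version would first fit the plane of each connected region and rotate the rectangle to the dominant point orientation, as the text describes.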
According to one aspect of the present invention, there is provided a binocular visual navigation system based on a power robot, comprising an image acquisition system that comprises a binocular camera which is connected to an image acquisition card by means of an image transmission wire and used to acquire environmental images of a road along which the power robot moves ahead and then upload via the image transmission wire the acquired images to the image acquisition card which then transmits the acquired environmental images of the road to a visual analysis system; the visual analysis system that achieves detection on obstacles within a road region of a transformer substation by means of the inverse projection theory and the three-dimensional reconstruction technology according to binocular image information acquired by the image acquisition system and information of intrinsic and extrinsic parameters of the camera, and transmits the information to a path planning system, wherein image parsing, distortion rectification and stereo rectification are carried out on the binocular image information to remove distortion and constrain pixel matching relations to a same X axis; the path panning system that builds a two-dimensional occupancy map according to the environmental information acquired from the image, plans a path and immediately adjusts a traveling path of the robot when an obstacle appears to avoid collision of the robot with the obstacle; and a motion control system that controls the robot to move according to the path planned by the path planning system.
Brief Description of the Drawings Fig. I is a block diagram of a system of the present invention; and Fig. 2 is a flowchart of a system of the present invention.
1. Image acquisition system, 2. Visual analysis system, 3. Path planning system, 4. Motion control system, and 5. Body of robot.
Description of the Embodiments The present invention will be further illustrated below by combining the accompanying drawings with embodiments.
As shown in Fig. 1, an image acquisition system 1 is provided that comprises a binocular camera which is connected to an image acquisition card by means of an image transmission wire and used to acquire environmental images of a road along 9a which the power robot moves ahead and then upload via the image transmission wire the acquired images to the image acquisition card which then transmits the acquired environmental images of the road to a visual analysis system 2.
The visual analysis system 2 achieves detection on obstacles within a road region of a transformer substation by means of the inverse projection theory and the three-dimensional reconstruction technology according to binocular image information acquired by the image acquisition system 1 and information of intrinsic and extrinsic parameters of the camera, and transmits the information to a path planning system 3.
The path panning system 3 builds a two-dimensional occupancy map according to the environmental information acquired by the image acquisition system 1, plans a path and immediately adjusts the traveling path of the robot when an obstacle appears to avoid collision of the robot with the obstacle.
A motion control system 4 controls the body of the robot 5 to move according to the path planned by the path planning system.
The binocular camera has two optical axes parallel to each other and a connecting line of two optical centers parallel to the ground, and is mounted on the body of the power robot by means of a mounting support that is a camera holding platform. An optical axis orientation of the binocular camera is set to be parallel to the Y axis of a coordinate system for the robot. The camera holding platform rotates about a fixed axis.
The traveling trajectory of a transformer substation inspection robot on an equipment space road is determined by means of path planning, and then the robot starts to travel.
While traveling, an onboard processor of the robot issues an instruction to turn on the binocular camera.
The binocular camera has two optical axes parallel to each other and the connecting line of two optical centers parallel to the ground, and is mounted on the body of the power robot by means of the mounting support that is the camera holding platform.
The optical axis orientation of the binocular camera is set to be parallel to the Y axis of the coordinate system for the robot. The camera holding platform rotates about a fixed axis. Thus, the pitching angle of the optical axes of the camera relative to the ground is changed. The pitching angle and mounting height of the camera are determined according to parameters such as the focal length and the field angle range of the camera, the shortest shooting distance of the robot and the safe distance of the robot.
As shown in Fig. 2, (1) the binocular camera starts to acquire environmental image = information.
(2) Stereo rectification Distortion rectification and stereo rectification are carried out on binocular images acquired by the transformer substation robot while traveling according to the known intrinsic parameters of the binocular camera by use of a calculation method that may refer to lens distortion in page 410 and stereo rectification in page 467 of Learning OpenCV.
(3) Road detection based on inverse projection Inverse projection transformation is carried out on the rectified left ocular image and right ocular image, and the right ocular image and the left ocular image are projected to the world coordinate system for the ground plane, wherein the inverse projection transformation is achieved by calculation via the intrinsic parameters K of the camera and the extrinsic parameters R, T of the camera relative to a coordinate system for the ground plane.
Assuming the intrinsic parameters of the current monocular (left or right ocular) ft, s ul I 0 f, v camera to be k= 0 0 1 , a rotation matrix and a translation matrix of the camera relative to the world coordinate system to be R,õ, and T, space pixel coordinates in the image plane to be (u, v) and target coordinates on the ground plane under the world coordinate system to be (X, Y, Z), and given that the height of the optical centers of the camera relative to the ground plane is H and the pitching included angle for the optical centers of the camera relative to the ground plane is 0, the world coordinate system for the ground plane is defined as 02 and ?ground S
defined as an equation of the ground plane under the coordinate system for the power robot; next, the equation of the ground plane according to the real environment of the transformer substation is defined as Z=0, thereby reaching Pground = 0 0 0 Homogeneous coordinates of a projection relation between the image plane and the ground plane obtained by projection transformation of the camera are expressed as follows:
_ _ X
= P õ v Z groom z KER
1 _ The left ocular image is inversely projected to the world coordinate system for the ground plane to obtain the road edge information still kept in a parallel relation according to affine invariance. After the inverse projection matrix of the binocular camera is obtained, the left ocular image and the right ocular image are inversely projected to the world coordinate system on the ground plane to obtain images ImgLremap'ImageRremap. Difference calculation is then carried out on the images ImageLrernap. ImageRremap to obtain Imagedifterence' followed by filtering of overlap information of the images in the world coordinate system for the ground plane and extracting of region information with the road edges not overlapping the obstacle region. The IIough transformation detection straight line is used for setting up constraints according to a quadrant direction, a length and a position of the straight line, and a straight line equation of the road edges on both sides under the coordinate system for the camera is extracted. Calculation is performed according to a coordinate transformation relation of the camera relative to the coordinate system for the robot to obtain road information under the coordinate system for the robot. and the reference road information is provided to the path planning system for path planning.
The Hough transformation detects straight lines by having each edge point vote for all possible straight lines passing through it and then taking the point with the highest accumulated value in the parameter space of the straight-line equation as the most probable matching straight line. Let the straight-line equation be y = kx + d. After a plurality of candidate straight lines are calculated, the two best-fitting straight-line equations are selected according to information such as the slope k, the orientation and the length of each straight line.
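The voting scheme can be sketched directly in the slope-intercept form y = kx + d used above; the candidate slopes and the intercept bin width are illustrative assumptions:

```python
from collections import Counter

def hough_lines(edge_points, k_values, d_step=1.0):
    """Accumulate votes for lines y = k*x + d through each edge point."""
    votes = Counter()
    for (x, y) in edge_points:
        for k in k_values:                     # every candidate slope
            d = y - k * x                      # intercept of the line through (x, y)
            d_bin = round(d / d_step) * d_step # quantise into accumulator bins
            votes[(k, d_bin)] += 1
    return votes.most_common()                 # strongest lines first

# Points sampled from two parallel "road edges": y = x and y = x + 5.
pts = [(x, x) for x in range(10)] + [(x, x + 5) for x in range(10)]
top_two = [line for line, _ in hough_lines(pts, [i * 0.5 for i in range(-4, 5)])[:2]]
```

The two strongest accumulator cells recover the two parallel edges, mirroring the "two best-fitting straight lines" selection described above.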
(4) Obstacle detection hypothesis
A pixel matching relation between the binocular images is calculated based on an SAD gray correlation between pixels according to the intrinsic and extrinsic parameters between the coordinates of the binocular camera, thereby obtaining the disparity image I disparity.
Assuming the SAD gray-correlation window between an image I(x,y) and an image J(x,y) to be of size (w,w), the SAD correlation between a point (x, y) in the image I(x,y) and a point (x', y') in the image J(x,y) is then obtained as follows:

    SAD(x, y) = Σ_{i=−w}^{w} Σ_{j=−w}^{w} | I(x+i, y+j) − J(x'+i, y'+j) |
Each pixel point (xr, y) at the same longitudinal coordinate is searched for in the right ocular image according to a pixel (xl, y) in the space of the left ocular image, and points having higher similarities are selected as candidate matching points by determining an SAD similarity between every two pixel points; then, the final matching relation is obtained according to sequential and unique constraints.
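The row-wise SAD search can be sketched as follows; the tiny synthetic images, window size and disparity range are illustrative assumptions, and the sketch omits the sequential and uniqueness constraints mentioned above:

```python
def sad(left, right, x, y, xr, w):
    """Sum of absolute differences between (2w+1)x(2w+1) windows."""
    total = 0
    for i in range(-w, w + 1):
        for j in range(-w, w + 1):
            total += abs(left[y + j][x + i] - right[y + j][xr + i])
    return total

def match_pixel(left, right, x, y, w, max_disp):
    """Return the disparity d minimising SAD for pixel (x, y) of the left image,
    searching only along the same row of the rectified right image."""
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_disp + 1):
        if x - d - w < 0:
            break                    # window would leave the right image
        cost = sad(left, right, x, y, x - d, w)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

Because stereo rectification constrains matches to the same row, only the horizontal offset (the disparity) has to be searched.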
Subsequently, the histograms of the disparity image are calculated, followed by a Gaussian smoothing operation on the histograms. A method of calculating the disparity histograms is to accumulate the number of the same pixels among all pixels in an image to generate a one-dimensional array for recording a probability of occurrence of each gray value in a two-dimensional image.
The smoothing process is as follows:
    P(x) = (1 / √(2πσ²)) · exp( −(x − x0)² / (2σ²) )

wherein P(x) is the value after filtering; x is the value of each unit of the histograms; and x0 and σ are the mean and standard deviation of the Gaussian function.
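The histogram accumulation and Gaussian smoothing described above can be sketched as follows; the kernel radius and sigma are illustrative assumptions:

```python
import math

def disparity_histogram(disp_image, levels=256):
    """Count how often each disparity value occurs (a 1-D array over gray values)."""
    hist = [0] * levels
    for row in disp_image:
        for d in row:
            hist[d] += 1
    return hist

def gaussian_smooth(hist, sigma=1.0, radius=3):
    """Convolve the 1-D histogram with a normalised Gaussian kernel."""
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]
    out = []
    for x in range(len(hist)):
        acc = 0.0
        for i, k in enumerate(kernel):
            j = min(max(x + i - radius, 0), len(hist) - 1)  # clamp at the borders
            acc += k * hist[j]
        out.append(acc)
    return out
```

Smoothing turns isolated disparity counts into rounded wave crests, so the later threshold-line test sees one crest per obstacle depth rather than noise spikes.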
A straight line having a given slope k is used to detect the histogram interval above the straight line on the smoothed histograms, thus obtaining an initial detection result for an obstacle. For example, the straight line may be taken at an angle of 45 degrees (slope k = 1).
A potential obstacle is detected by means of the histogram segmentation algorithm based on the disparity image in the road region. As the disparity histograms cannot reflect the spatial relation between pixels, the obstacle information seen according to the wave crests may not be connected in the image space. Therefore, connected regions need to be extracted by means of a connected-component analysis; moreover, whether two regions belong to the same obstacle is judged according to the distances between the regions, and the obstacle is marked in the image space.
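The connected-region grouping can be sketched with a standard breadth-first connected-component labelling; 4-connectivity and the binary-mask representation are assumptions:

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected regions of a binary mask; returns (labels, count)."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and labels[r][c] == 0:
                current += 1                       # start a new region
                q = deque([(r, c)])
                labels[r][c] = current
                while q:                            # flood-fill the region
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and labels[ny][nx] == 0:
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current
```

Regions whose labels differ but whose bounding boxes lie close together can then be merged as one obstacle, as the text describes.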
(5) Three-dimensional information recovery
Given the matching relation of the left ocular image and the right ocular image and the intrinsic and extrinsic parameters of the binocular camera, a three-dimensional reconstruction equation Q is constructed and direct calculation is performed to obtain three-dimensional point coordinates:
    Q = | 1   0     0          −u0          |
        | 0   1     0          −v0          |
        | 0   0     0           f           |
        | 0   0   −1/Tx   (u0 − u0′)/Tx     |

so that [X, Y, Z, W]^T = Q · [u, v, d, 1]^T, wherein u0 and v0 are rectified horizontal and vertical coordinates of a main point of the reference camera; u0′ is a horizontal coordinate of a main point of another ocular camera; Tx is a baseline distance between the two cameras; (X, Y, Z) is three-dimensional point coordinates under a three-dimensional coordinate system with a camera as an origin; (u, v, d) is image coordinate values and a corresponding disparity value.
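When the two main points coincide (u0 = u0′), dehomogenising Q · [u, v, d, 1]^T reduces to the familiar triangulation formulas below. A minimal sketch with an illustrative focal length, main point and baseline (none of these values come from the patent):

```python
fu = 700.0            # focal length in pixels (assumed)
u0, v0 = 320.0, 240.0 # main point of the reference camera (assumed)
Tx = 0.12             # baseline between the two cameras in metres (assumed)

def reproject(u, v, d):
    """3-D point for pixel (u, v) with disparity d > 0 (camera as origin)."""
    Z = fu * Tx / d           # depth by similar triangles: Z = f*Tx/d
    X = (u - u0) * Z / fu     # back-project through the pinhole model
    Y = (v - v0) * Z / fu
    return X, Y, Z
```

Each matched obstacle pixel yields one such point; the connected-region points are then fitted to planes and enclosed in a minimum rectangle, as described below.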
After the three-dimensional points of the obstacle region in the image are obtained, planes to which three-dimensional points of connected regions belong are fitted according to the distribution of the three-dimensional points; a minimum enclosing rectangle is created and a centroid is calculated, thereby eventually determining an actual size and a specific position of the obstacle.
(6) Grid map generation
The three-dimensional point coordinates of the obstacle are mapped to the world coordinate system. A grid map within a certain range is built with a midpoint in the connecting line of the optical centers of the binocular camera as an origin, and grid filling is carried out according to the number of projections of the three-dimensional points on the ground plane.
(7) Path planning
Path planning is carried out for the robot using the Markov path planning algorithm on the basis of the local two-dimensional occupancy map obtained in the step (6) and known global target points as well as the position of the robot in a global map obtained by a positioning system, thereby obtaining a new path.
(8) Robot control
A speed that a traveling structure of the robot should execute and an angle of deviation are calculated according to the obtained new path in accordance with the current speed of the robot and the updating time of the path, and the information is transmitted by the control system to a traveling mechanism driver.
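A minimal sketch of such a control computation; the function name, waypoint convention and rate limits are assumptions, as the patent does not specify the controller:

```python
import math

def control_command(next_point, current_speed, update_dt, max_speed=1.0):
    """Speed and deviation angle towards the next waypoint in the robot frame
    (x to the right, y straight ahead). Limits are illustrative assumptions."""
    x, y = next_point
    distance = math.hypot(x, y)
    angle = math.atan2(x, y)               # deviation from the heading axis
    # cover the remaining distance within one path-update period, but never
    # exceed the speed cap or accelerate faster than an assumed 0.2 m/s per step
    speed = min(distance / update_dt, max_speed, current_speed + 0.2)
    return speed, angle
```

The resulting (speed, angle) pair is what would be handed to the traveling mechanism driver each time the path is refreshed.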
While the specific implementations of the present invention are described in conjunction with the accompanying drawings above, they are not limitations to the protection scope of the present invention. It should be understood by a person skilled in the art that various modifications or variations that a person skilled in the art can make without creative work on the basis of the technical solutions of the present invention still fall into the protection scope of the present invention.
Claims (8)
1. A binocular visual navigation system based on a power robot, comprising an image acquisition system that comprises a binocular camera which is connected to an image acquisition card by means of an image transmission wire and used to acquire environmental images of a road along which the power robot moves ahead and then upload via the image transmission wire the acquired images to the image acquisition card which then transmits the acquired environmental images of the road to a visual analysis system;
the visual analysis system that achieves detection on obstacles within a road region of a transformer substation by means of the inverse projection theory and the three-dimensional reconstruction technology according to binocular image information acquired by the image acquisition system and information of intrinsic and extrinsic parameters of the camera, and transmits the information to a path planning system, wherein image parsing, distortion rectification and stereo rectification are carried out on the binocular image information to remove distortion and constrain pixel matching relations to a same X axis;
the path planning system that builds a two-dimensional occupancy map according to the environmental information acquired from the image, plans a path and immediately adjusts a traveling path of the robot when an obstacle appears to avoid collision of the robot with the obstacle; and a motion control system that controls the robot to move according to the path planned by the path planning system.
2. The binocular visual navigation system based on a power robot of claim 1, wherein the binocular camera has two optical axes parallel to each other and a connecting line of two optical centers parallel to ground, and is mounted on a body of the power robot by means of a mounting support that is a camera holding platform; an optical axis orientation of the binocular camera is set to be parallel to a Y axis of a coordinate system for the robot; the camera holding platform rotates about a fixed axis.
3. A visual navigation method based on the binocular visual navigation system based on a power robot of claim 1, comprising the following specific steps:
step 1, acquiring binocular environmental images, and obtaining binocular images without distortion and with pixel matching relations constrained to a same X
axis via image parsing, distortion rectification and stereo rectification;
step 2, carrying out inverse projection transformation on the rectified images, projecting a left ocular image and a right ocular image to a ground plane, carrying out pixel subtraction on a re-projected left ocular image and right ocular image and carrying out Canny edge detection on a difference image, and then using a Hough straight line to detect a road region and road edges;
step 3, after determining the road region and the road edges, determining a matching relation of the left ocular image and the right ocular image according to a gray region correlation of regions in regions of interest of the images; then generating a disparity image according to the matching relation of the left ocular image and the right ocular image, calculating histograms of the disparity image, carrying out histogram segmentation on the disparity image and judging whether an obstacle is present in the disparity image; if so, going to step 4, otherwise, going back to the step 1;
step 4, determining three-dimensional information of the obstacle according to an obstacle region obtained by segmentation and camera calibration information, and determining a size and an average distance of the obstacle region according to the three-dimensional information of the obstacle; and step 5, transmitting the detected obstacle information to the robot control system, updating a map according to the new obstacle information, planning, by the path planning system, a next moving direction of the robot according to existing path information, and inputting, by the robot control system, a speed into a mobile platform driver according to a current traveling direction of the robot to allow the robot to move; if a next step of operation cannot be executed, stopping the robot and reporting a signal to an upper computer; otherwise, repeating the step 1.
4. The visual navigation method of claim 3, wherein the step 1 comprises the following specific steps:
(1-1) acquiring, by the binocular camera, the environmental image information;
(1-2) carrying out distortion rectification and stereo rectification on the left ocular image and the right ocular image acquired during traveling of the power robot according to intrinsic parameters K1, K r of the binocular camera, relative position relations R, T of the binocular camera and distortion parameters (k1, k2, k3, p1, p2) obtained by calibration; and (1-3) carrying out inverse projection transformation on the rectified left ocular image and right ocular image, re-projecting the binocular images to the ground plane, wherein the inverse projection transformation is determined via the intrinsic parameters K1, K r and the relative position relations R, T of the binocular camera and a rotation matrix R w and a translation matrix T w of a coordinate system for a reference camera relative to a world coordinate system for the ground plane;
assuming the intrinsic parameters of a current left or right ocular camera to be K=
with f u and f v being a horizontal focal length and a longitudinal focal length, u 0, v 0 representing a main point position in an image plane and s being a pixel aspect ratio, a rotation matrix and a translation matrix of the current camera relative to the world coordinate system to be R w and T w, space pixel coordinates in the image plane to be (u, v), target coordinates on the ground plane under the world coordinate system to be (X, Y, Z), and given that a height of optical centers of the binocular camera relative to the ground plane is H and a pitching included angle for the optical centers of the binocular camera relative to the ground plane is θ, defining the coordinate system for the power robot as O2 and P ground as an equation of the ground plane under the coordinate system for the robot, and defining the equation of the ground plane according to the real environment of the transformer substation as Z=0, thereby obtaining a projection relation between the image plane and the ground plane according to a projection relation of the binocular camera, as represented by homogeneous coordinates as follows:
5. The visual navigation method of claim 3, wherein the step 2 comprises the following specific steps:
(2-1) inversely projecting the left ocular image and the right ocular image to a world coordinate system, and still obtaining road edge information kept in a parallel relation according to a mapping relation between the planes; and (2-2) after obtaining an inverse projection matrix of the binocular camera, inversely projecting the left ocular image and the right ocular image to the world coordinate system to obtain images ImgL remap and ImgR remap; then carrying out difference calculation on the images ImgL remap and ImgR remap to obtain Image difference; filtering overlap information of the images ImgL remap and ImgR remap in the world coordinate system, and extracting region information with the road edges not overlapping the obstacle region using a Canny edge detection algorithm; using the Hough transformation detection straight line for setting up constraints according to a quadrant direction, a length and a position of the straight line, and extracting a straight line equation of the road edges on both sides under the coordinate system for the camera; carrying out calculation according to a coordinate transformation relation of the camera relative to the coordinate system for the robot to obtain road information under the coordinate system for the robot, and providing reference road information to the path planning system for path planning.
6. The visual navigation method of claim 3, wherein the step 3 comprises the following specific steps:
(3-1) calculating a pixel matching relation between the binocular images based on an SAD gray correlation between pixels according to the intrinsic and extrinsic parameters between the coordinates of the binocular camera, thereby obtaining the disparity image I disparity;
(3-2) assuming the SAD gray-correlation window between an image I(x,y) and an image J(x,y) to be of size (w,w), then obtaining the SAD correlation between a point (x, y) in the image I(x,y) and a point (x', y') in the image J(x,y) as follows:
(3-3) searching for each pixel point (xr, y) at the same longitudinal coordinate in the right ocular image according to a pixel (xl, y) in the space of the left ocular image, selecting points having higher similarities as candidate matching points by determining an SAD
similarity between every two pixel points, and then obtaining the final matching relation according to sequential and unique constraints;
(3-4) then calculating the histograms of the disparity image, followed by a Gaussian smoothing operation on the histograms, wherein a method of calculating the disparity histograms is to accumulate the number of the same pixels among all pixels in an image to generate a one-dimensional array for recording a probability of occurrence of each gray value in a two-dimensional image;
the smoothing process is as follows:
wherein P(x) is a value after filtering; x is a value of each unit of the histograms;
(x 0, σ) are the mean and standard deviation of the Gaussian function; and (3-5) using a straight line having a given slope to detect a histogram interval above the straight line according to the smoothed histograms, thus obtaining an initial detection result for an obstacle.
7. The visual navigation method of claim 3, wherein the step 4 comprises the following specific steps:
(4-1) after obtaining the matching relation of the obstacle region, obtaining three-dimensional point coordinates of the obstacle region according to the triangle location principle on the basis of the known intrinsic and extrinsic parameters of the binocular camera;
(4-2) constructing a three-dimensional reconstruction equation Q according to the matching relation between the left and right ocular images and the intrinsic and extrinsic parameters of the binocular camera, and obtaining the three-dimensional point coordinates of the obstacle region:
wherein u0 and v0 are rectified horizontal and vertical coordinates of a main point of the reference camera; u0' is a horizontal coordinate of a main point of another ocular camera;
T X is a baseline distance between the two cameras; (X, Y, Z) is three-dimensional point coordinates under a three-dimensional coordinate system with a camera as an origin; (u, v, d) is image coordinate values and a corresponding disparity value; and (4-3) after obtaining the three-dimensional points of the obstacle region in the image, fitting planes to which three-dimensional points of connected regions belong according to the distribution of the three-dimensional points of the obstacle, creating a minimum enclosing rectangle and calculating a centroid, thereby eventually determining an actual size and a specific position of the obstacle.
8. The visual navigation method of claim 3, wherein the step 5 comprises the following specific steps:
(5-1) mapping the three-dimensional point coordinates of the obstacle to the world coordinate system, building a grid map within a certain range with a midpoint in the connecting line of the optical centers of the binocular camera as an origin, and carrying out grid filling according to the number of projections of the three-dimensional points on the ground plane;
(5-2) carrying out path planning for the power robot using the Markov path planning algorithm according to the local two-dimensional occupancy map and known global target points as well as the position of the power robot in a global map, thereby obtaining an obstacle-avoided path; and (5-3) planning, by the path planning system, a next moving direction for the robot, and inputting, by the robot control system, a speed into the mobile platform driver according to the current traveling direction of the robot to allow the robot to move.
Applications Claiming Priority (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201320506592.5U CN203386241U (en) | 2013-08-19 | 2013-08-19 | Electric power robot-based binocular vision navigation system |
| CN201320506592.5 | 2013-08-19 | ||
| CN201310362290.XA CN103413313B (en) | 2013-08-19 | 2013-08-19 | The binocular vision navigation system of electrically-based robot and method |
| CN201310362290.X | 2013-08-19 | ||
| PCT/CN2014/079912 WO2015024407A1 (en) | 2013-08-19 | 2014-06-16 | Power robot based binocular vision navigation system and method based on |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CA2950791A1 CA2950791A1 (en) | 2015-02-26 |
| CA2950791C true CA2950791C (en) | 2019-04-16 |
Family
ID=52483033
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CA2950791A Active CA2950791C (en) | 2013-08-19 | 2014-06-16 | Binocular visual navigation system and method based on power robot |
Country Status (2)
| Country | Link |
|---|---|
| CA (1) | CA2950791C (en) |
| WO (1) | WO2015024407A1 (en) |
Families Citing this family (238)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105974938B (en) * | 2016-06-16 | 2023-10-03 | 零度智控(北京)智能科技有限公司 | Obstacle avoidance methods, devices, carriers and drones |
| CN106228564B (en) * | 2016-07-29 | 2023-04-07 | 国网河南省电力公司郑州供电公司 | External parameter two-step combined online calibration method and system of multi-view camera |
| CN106548486B (en) * | 2016-11-01 | 2024-02-27 | 浙江大学 | A method for position tracking of unmanned vehicles based on sparse visual feature maps |
| CN106323294B (en) * | 2016-11-04 | 2023-06-09 | 新疆大学 | Positioning method and positioning device for substation inspection robot |
| CN106708036A (en) * | 2016-11-30 | 2017-05-24 | 福建农林大学 | Path navigation apparatus based on embedded spray robot, and realization method thereof |
| CN106873593B (en) * | 2017-03-07 | 2019-06-07 | 浙江大学 | A kind of intelligent ball collecting robot based on OpenCV image recognition algorithm |
| CN107153382B (en) * | 2017-06-16 | 2023-03-21 | 华南理工大学 | Flexible hinged plate vibration control device and method based on binocular vision measurement |
| CN109933092B (en) * | 2017-12-18 | 2022-07-05 | 北京京东乾石科技有限公司 | Aircraft obstacle avoidance method, device, readable storage medium and aircraft |
| CN108394814A (en) * | 2018-02-05 | 2018-08-14 | 上海振华重工(集团)股份有限公司 | Gantry crane cart based on image recognition guides system and method |
| CN108994820A (en) * | 2018-07-27 | 2018-12-14 | 国网江苏省电力有限公司徐州供电分公司 | Robot system and working scene construction method for livewire work |
| CN109118585B (en) * | 2018-08-01 | 2023-02-10 | 武汉理工大学 | Virtual compound eye camera system meeting space-time consistency for building three-dimensional scene acquisition and working method thereof |
| CN109046846A (en) * | 2018-10-30 | 2018-12-21 | 石家庄辐科电子科技有限公司 | Intelligent circuit board paint-spraying apparatus based on a linear motor |
| CN109264585A (en) * | 2018-10-31 | 2019-01-25 | 郑州桔槔智能科技有限公司 | Tower crane Unmanned Systems |
| CN109583327A (en) * | 2018-11-13 | 2019-04-05 | 青岛理工大学 | Binocular vision wheat seedling path fitting method |
| CN111220156B (en) * | 2018-11-25 | 2023-06-23 | 星际空间(天津)科技发展有限公司 | Navigation method based on city live-action |
| CN109859271B (en) * | 2018-12-14 | 2022-09-27 | 哈尔滨工程大学 | A joint calibration method of underwater camera and forward-looking sonar |
| CN112074868B (en) * | 2018-12-29 | 2025-12-12 | 河南埃尔森智能科技有限公司 | Structured light-based industrial robot positioning methods, devices, controllers, and media |
| CN109801309B (en) * | 2019-01-07 | 2023-06-20 | 华南理工大学 | Obstacle sensing method based on RGB-D camera |
| CN109859277A (en) * | 2019-01-21 | 2019-06-07 | 陕西科技大学 | Robot vision system calibration method based on Halcon |
| CN109785317B (en) * | 2019-01-23 | 2022-11-01 | 辽宁工业大学 | Vision system for an automatic palletizing truss robot |
| CN109596078A (en) * | 2019-01-28 | 2019-04-09 | 吉林大学 | Multi-information fusion spectrum of road surface roughness real-time testing system and test method |
| CN109919026B (en) * | 2019-01-30 | 2023-06-30 | 华南理工大学 | A Local Path Planning Method for Unmanned Surface Vehicle |
| CN110163921B (en) * | 2019-02-15 | 2023-11-14 | 苏州巨能图像检测技术有限公司 | Automatic calibration method based on lamination machine vision system |
| CN109917670B (en) * | 2019-03-08 | 2022-10-21 | 北京精密机电控制设备研究所 | A Simultaneous Localization and Mapping Method for Intelligent Robot Clusters |
| CN110148169B (en) * | 2019-03-19 | 2022-09-27 | 长安大学 | Vehicle target three-dimensional information acquisition method based on PTZ (pan/tilt/zoom) pan-tilt camera |
| CN111768448A (en) * | 2019-03-30 | 2020-10-13 | 北京伟景智能科技有限公司 | A method of spatial coordinate system calibration based on multi-camera detection |
| CN111783502B (en) * | 2019-04-03 | 2025-01-28 | 希迪智驾(湖南)股份有限公司 | Visual information fusion processing method, device and storage medium based on vehicle-road collaboration |
| CN110033465B (en) * | 2019-04-18 | 2023-04-25 | 天津工业大学 | Real-time three-dimensional reconstruction method applied to binocular endoscopic medical image |
| CN110000793B (en) * | 2019-04-29 | 2024-07-16 | 武汉库柏特科技有限公司 | Robot motion control method and device, storage medium and robot |
| CN110147748B (en) * | 2019-05-10 | 2022-09-30 | 安徽工程大学 | Mobile robot obstacle identification method based on road edge detection |
| CN110223350A (en) * | 2019-05-23 | 2019-09-10 | 汕头大学 | Automatic building-block sorting method and system based on binocular vision |
| CN111829434B (en) * | 2019-05-28 | 2023-01-10 | 北京伟景智能科技有限公司 | Material flow metering detection method and system |
| CN112036210B (en) * | 2019-06-03 | 2024-03-08 | 杭州海康机器人股份有限公司 | Method and device for detecting obstacle, storage medium and mobile robot |
| CN110119152A (en) * | 2019-06-15 | 2019-08-13 | 大连亿斯德环境科技有限公司 | Multifunctional intelligent wheelchair control system and corresponding control method |
| CN112212852B (en) * | 2019-07-12 | 2024-06-21 | 浙江未来精灵人工智能科技有限公司 | Positioning method, mobile device and storage medium |
| CN110262517B (en) * | 2019-07-18 | 2022-05-10 | 石家庄辰宙智能装备有限公司 | Trajectory tracking control method of AGV (automatic guided vehicle) system |
| CN110441781A (en) * | 2019-08-14 | 2019-11-12 | 大连海事大学 | Reversing radar image system based on information fusion |
| CN110568846A (en) * | 2019-08-28 | 2019-12-13 | 佛山市兴颂机器人科技有限公司 | Intelligent AGV navigation method and system |
| CN110543859B (en) * | 2019-09-05 | 2023-08-18 | 大连海事大学 | Sea cucumber autonomous identification and grabbing method based on deep learning and binocular positioning |
| WO2021056139A1 (en) * | 2019-09-23 | 2021-04-01 | 深圳市大疆创新科技有限公司 | Method and device for acquiring landing position, unmanned aerial vehicle, system, and storage medium |
| CN110618682B (en) * | 2019-09-24 | 2022-11-01 | 河海大学常州校区 | Centralized control type football robot color code structure and identification method thereof |
| CN110706333B (en) * | 2019-09-25 | 2023-01-24 | 汕头大学 | A Reconstruction Method Based on Manual Calibration of Pipeline Location and Leakage Points |
| CN110543177A (en) * | 2019-09-27 | 2019-12-06 | 珠海市一微半导体有限公司 | An autonomous baby-walking robot and automatic baby-walking method |
| CN112710308B (en) * | 2019-10-25 | 2024-05-31 | 阿里巴巴集团控股有限公司 | Robot positioning method, device and system |
| CN110774283A (en) * | 2019-10-29 | 2020-02-11 | 龙岩学院 | A computer vision-based robot walking control system and method |
| CN111047636B (en) * | 2019-10-29 | 2024-04-09 | 轻客小觅机器人科技(成都)有限公司 | Obstacle avoidance system and method based on active infrared binocular vision |
| CN112154394A (en) * | 2019-10-31 | 2020-12-29 | 深圳市大疆创新科技有限公司 | Terrain detection method, movable platform, control device, system and storage medium |
| CN110969158B (en) * | 2019-11-06 | 2023-07-25 | 中国科学院自动化研究所 | Target detection method, system and device based on robot vision for underwater operation |
| CN111127560B (en) * | 2019-11-11 | 2022-05-03 | 江苏濠汉信息技术有限公司 | Calibration method and system for three-dimensional reconstruction binocular vision system |
| CN110991708A (en) * | 2019-11-15 | 2020-04-10 | 云南电网有限责任公司电力科学研究院 | A system and method for equipment distribution based on path judgment |
| CN110991277B (en) * | 2019-11-20 | 2023-09-22 | 湖南检信智能科技有限公司 | Multi-dimensional multi-task learning evaluation system based on deep learning |
| CN111123911B (en) * | 2019-11-22 | 2023-03-24 | 北京空间飞行器总体设计部 | Legged intelligent planetary-surface exploration robot sensing system and working method thereof |
| CN110991360B (en) * | 2019-12-06 | 2023-07-04 | 合肥科大智能机器人技术有限公司 | Robot inspection point position intelligent configuration method based on visual algorithm |
| CN111091086B (en) * | 2019-12-11 | 2023-04-25 | 安徽理工大学 | A method of using machine vision technology to improve the recognition rate of logistics bill feature information |
| CN110991387B (en) * | 2019-12-11 | 2024-02-02 | 西安安森智能仪器股份有限公司 | A distributed processing method and system for image recognition of robot clusters |
| CN111060091B (en) * | 2019-12-13 | 2023-09-01 | 西安航空职业技术学院 | Robot navigation system |
| CN111179300A (en) * | 2019-12-16 | 2020-05-19 | 新奇点企业管理集团有限公司 | Method, apparatus, system, device and storage medium for obstacle detection |
| CN111243017B (en) * | 2019-12-24 | 2024-05-10 | 广州中国科学院先进技术研究所 | Intelligent robot grabbing method based on 3D vision |
| CN111060074A (en) * | 2019-12-25 | 2020-04-24 | 深圳壹账通智能科技有限公司 | Navigation method, device, computer equipment and medium based on computer vision |
| CN111028231B (en) * | 2019-12-27 | 2023-06-30 | 易思维(杭州)科技有限公司 | Workpiece position acquisition system based on ARM and FPGA |
| CN111462171A (en) * | 2020-01-10 | 2020-07-28 | 北京伟景智能科技有限公司 | Mark point detection tracking method |
| CN111292360A (en) * | 2020-01-21 | 2020-06-16 | 四川省交通勘察设计研究院有限公司 | Method and system for recommending ship driving route |
| CN111444763B (en) * | 2020-02-24 | 2023-07-18 | 珠海格力电器股份有限公司 | Security control method and device, storage medium and air conditioner |
| CN111251301B (en) * | 2020-02-27 | 2022-09-16 | 云南电网有限责任公司电力科学研究院 | Motion planning method for operation arm of power transmission line maintenance robot |
| CN111338347B (en) * | 2020-03-05 | 2023-08-25 | 大连海事大学 | A finite-time continuous control method for surface vehicles based on monocular vision |
| CN111324126B (en) * | 2020-03-12 | 2022-07-05 | 集美大学 | Vision unmanned ship |
| CN111563878B (en) * | 2020-03-27 | 2023-04-11 | 中国科学院西安光学精密机械研究所 | A Method of Spatial Target Positioning |
| CN113448340B (en) * | 2020-03-27 | 2022-12-16 | 北京三快在线科技有限公司 | Unmanned aerial vehicle path planning method and device, unmanned aerial vehicle and storage medium |
| CN111462241B (en) * | 2020-04-08 | 2023-03-28 | 北京理工大学 | Target positioning method based on monocular vision |
| CN113538477B (en) * | 2020-04-14 | 2023-08-29 | 北京达佳互联信息技术有限公司 | Method and device for acquiring plane pose, electronic equipment and storage medium |
| CN111429571B (en) * | 2020-04-15 | 2023-04-07 | 四川大学 | Rapid stereo matching method based on spatio-temporal image information joint correlation |
| CN111596654B (en) * | 2020-04-17 | 2023-07-11 | 国网湖南省电力有限公司 | Cable trench robot navigation obstacle avoidance method based on improved D star path planning algorithm |
| CN111681283B (en) * | 2020-05-11 | 2023-04-07 | 哈尔滨工业大学 | Monocular stereoscopic vision-based relative pose calculation method applied to wireless charging alignment |
| CN111598945B (en) * | 2020-05-18 | 2023-07-18 | 湘潭大学 | A three-dimensional positioning method for the crankshaft cover of an automobile engine |
| CN111652118B (en) * | 2020-05-29 | 2023-06-20 | 大连海事大学 | Marine product autonomous grabbing and guiding method based on underwater target neighbor distribution |
| CN111583346A (en) * | 2020-07-06 | 2020-08-25 | 深圳市瑞立视多媒体科技有限公司 | Camera Calibration System Based on Robot Scanning Field |
| CN111754577B (en) * | 2020-07-10 | 2023-07-11 | 南京艾格慧元农业科技有限公司 | Target recognition system and tractor reversing and farm tool connecting method based on same |
| CN112017236B (en) * | 2020-07-13 | 2023-10-31 | 魔门塔(苏州)科技有限公司 | A method and device for calculating the position of a target based on a monocular camera |
| CN111833333B (en) * | 2020-07-16 | 2023-10-03 | 西安科技大学 | A method and system for position and orientation measurement of cantilever excavation equipment based on binocular vision |
| CN111862193A (en) * | 2020-07-21 | 2020-10-30 | 太仓光电技术研究所 | A method and device for binocular vision positioning of electric welding spot based on shape descriptor |
| CN111857143A (en) * | 2020-07-23 | 2020-10-30 | 北京以萨技术股份有限公司 | Robot path planning method, system, terminal and medium based on machine vision |
| WO2022027611A1 (en) * | 2020-08-07 | 2022-02-10 | 苏州珊口智能科技有限公司 | Positioning method and map construction method for mobile robot, and mobile robot |
| CN111982300B (en) * | 2020-08-20 | 2024-01-23 | 湖北林青测控科技有限公司 | Regional dangerous target thermal value positioning and acquisition system and device |
| CN111998853B (en) * | 2020-08-27 | 2024-11-26 | 西安达升科技股份有限公司 | A AGV visual navigation method and system |
| CN112050814A (en) * | 2020-08-28 | 2020-12-08 | 国网智能科技股份有限公司 | Unmanned aerial vehicle visual navigation system and method for indoor transformer substation |
| CN112363494B (en) * | 2020-09-24 | 2024-09-20 | 深圳优地科技有限公司 | Robot forward path planning method, device and storage medium |
| CN112461227B (en) * | 2020-10-22 | 2023-07-21 | 新兴际华集团有限公司 | Wheel type chassis robot inspection intelligent autonomous navigation method |
| CN112396611B (en) * | 2020-10-27 | 2024-02-13 | 武汉理工大学 | A point-line visual odometry adaptive optimization method, device and storage medium |
| CN112330808B (en) * | 2020-10-30 | 2024-04-02 | 珠海一微半导体股份有限公司 | Optimization method based on local map and visual robot |
| CN112541951A (en) * | 2020-11-13 | 2021-03-23 | 国网浙江省电力有限公司舟山供电公司 | Monitoring system and monitoring method for preventing ship from hooking off cross-sea overhead power line |
| CN112379605B (en) * | 2020-11-24 | 2023-03-28 | 中国人民解放军火箭军工程大学 | Bridge crane semi-physical simulation control experiment system and method based on visual servo |
| CN112308033B (en) * | 2020-11-25 | 2024-04-05 | 珠海一微半导体股份有限公司 | Obstacle collision warning method based on depth data and visual chip |
| CN112270311B (en) * | 2020-11-25 | 2023-12-19 | 武汉理工大学 | A fast detection method and system for near targets based on vehicle-mounted surround back projection |
| CN112587378B (en) * | 2020-12-11 | 2022-06-07 | 中国科学院深圳先进技术研究院 | Exoskeleton robot footprint planning system and method based on vision and storage medium |
| CN114619443B (en) * | 2020-12-14 | 2023-07-21 | 苏州大学 | Robot Active Safety System |
| CN112720469B (en) * | 2020-12-18 | 2022-09-09 | 北京工业大学 | Microscopic Stereo Vision for Zero Point Calibration of Three-axis Translation Motion System |
| CN112731925B (en) * | 2020-12-21 | 2024-03-15 | 浙江科技学院 | Cone identification, path planning and control method for driverless formula racing |
| CN113687648A (en) * | 2020-12-24 | 2021-11-23 | 武汉科技大学 | Multifunctional campus epidemic prevention robot |
| CN112509065B (en) * | 2020-12-28 | 2024-05-28 | 中国科学院合肥物质科学研究院 | A visual guidance method for deep-sea robotic arm operations |
| CN112977764B (en) * | 2020-12-30 | 2024-09-10 | 核动力运行研究所 | Underwater robot course control system and method based on vision |
| CN112700470B (en) * | 2020-12-30 | 2023-12-08 | 上海智能交通有限公司 | A method of target detection and trajectory extraction based on traffic video streams |
| CN112657176A (en) * | 2020-12-31 | 2021-04-16 | 华南理工大学 | Binocular projection man-machine interaction method combined with portrait behavior information |
| CN112859860B (en) * | 2021-01-13 | 2024-09-27 | 宁波工业互联网研究院有限公司 | Robot system and path planning method thereof |
| CN112819943B (en) * | 2021-01-15 | 2022-08-30 | 北京航空航天大学 | Active vision SLAM system based on panoramic camera |
| CN112712534B (en) * | 2021-01-15 | 2023-05-26 | 山东理工大学 | Corn rhizome navigation datum line extraction method based on navigation trend line |
| CN112907973B (en) * | 2021-01-19 | 2023-04-25 | 四川星盾科技股份有限公司 | High-precision complete information acquisition and real 3D morphology restoration comparison system and method for motor vehicle engraving codes |
| CN112801966B (en) * | 2021-01-21 | 2024-03-15 | 北京科技大学设计研究院有限公司 | Online detection method for deviation of hot rolled strip steel |
| CN112934541B (en) * | 2021-01-25 | 2022-08-09 | 济南蓝图士智能技术有限公司 | Automatic spraying device and method based on visual 3D reconstruction |
| CN113067847B (en) * | 2021-02-02 | 2022-07-12 | 绍兴晨璞网络科技有限公司 | Design method of matching type ultra-wideband positioning system architecture |
| CN114911221B (en) * | 2021-02-09 | 2023-11-28 | 北京小米机器人技术有限公司 | Robot control method, device and robot |
| CN112907609A (en) * | 2021-03-08 | 2021-06-04 | 中新国际联合研究院 | Method and device for automatically collecting building plastering progress information |
| CN113034671B (en) * | 2021-03-23 | 2024-01-09 | 成都航空职业技术学院 | Traffic sign three-dimensional reconstruction method based on binocular vision |
| CN113012236B (en) * | 2021-03-31 | 2022-06-07 | 武汉理工大学 | Intelligent robot polishing method based on crossed binocular vision guidance |
| CN112975361B (en) * | 2021-04-06 | 2025-09-26 | 南京航空航天大学苏州研究院 | A high-precision docking method for laser vision fusion in complex lighting environments |
| CN113112543B (en) * | 2021-04-08 | 2024-11-05 | 东方电气集团科学技术研究院有限公司 | A large-field-of-view two-dimensional real-time positioning system for visual moving targets |
| CN113269837B (en) * | 2021-04-27 | 2023-08-18 | 西安交通大学 | Positioning navigation method suitable for complex three-dimensional environment |
| CN113140006B (en) * | 2021-04-30 | 2023-01-20 | 中德(珠海)人工智能研究院有限公司 | Control method and system of self-balancing robot and storage medium |
| CN113222965B (en) * | 2021-05-27 | 2023-12-29 | 西安交通大学 | Three-dimensional observation method of discharge channel |
| CN113190047B (en) * | 2021-05-28 | 2023-09-05 | 广东工业大学 | A two-dimensional plane-based path recognition method for UAV swarms |
| CN113505646B (en) * | 2021-06-10 | 2024-04-12 | 清华大学 | Target searching method based on semantic map |
| CN113566830B (en) * | 2021-07-20 | 2023-09-26 | 常州大学 | Outdoor high-precision autonomous navigation device and method for wheel-foot composite robot |
| CN113467468B (en) * | 2021-07-23 | 2024-03-29 | 合肥工业大学 | Intelligent robot obstacle avoidance system and method based on embedded robot |
| CN113658221B (en) * | 2021-07-28 | 2024-04-26 | 同济大学 | AGV pedestrian following method based on monocular camera |
| CN113781550B (en) * | 2021-08-10 | 2024-10-29 | 国网河北省电力有限公司保定供电分公司 | A quadruped robot positioning method and system |
| CN113762544A (en) * | 2021-08-26 | 2021-12-07 | 深圳证券通信有限公司 | Intelligent machine room equipment position inspection and management method based on computer vision |
| CN113643280B (en) * | 2021-08-30 | 2023-09-22 | 燕山大学 | A plate sorting system and method based on computer vision |
| CN113703462B (en) * | 2021-09-02 | 2023-06-16 | 东北大学 | Unknown space autonomous exploration system based on quadruped robot |
| CN114332345B (en) * | 2021-09-23 | 2023-06-20 | 北京科技大学 | A method and system for local 3D reconstruction of metallurgical reservoir area based on binocular vision |
| CN113838147B (en) * | 2021-09-29 | 2024-01-19 | 上海海事大学 | Blade assembly visual guiding method and system based on depth camera |
| CN114037605B (en) * | 2021-09-29 | 2025-04-08 | 北京控制工程研究所 | A remote path planning method for patrollers combined with original images |
| CN113848925A (en) * | 2021-09-30 | 2021-12-28 | 天津大学 | SLAM-based unmanned rolling dynamic path autonomous planning method |
| CN113822946B (en) * | 2021-10-09 | 2023-10-20 | 上海第二工业大学 | Mechanical arm grabbing method based on computer vision |
| CN113753530B (en) * | 2021-10-12 | 2024-04-12 | 华南农业大学 | Machine vision-based tea branch citrus posture recognition and automatic adjustment device |
| CN113960921B (en) * | 2021-10-19 | 2023-11-28 | 华南农业大学 | Visual navigation control method and system for orchard tracked vehicle |
| CN114037757B (en) * | 2021-10-19 | 2024-06-25 | 中国矿业大学(北京) | Binocular camera posture perception system based on synchronized images |
| CN114019963B (en) * | 2021-10-27 | 2023-06-30 | 西北工业大学 | External positioning system for desktop cluster robot |
| CN114050649A (en) * | 2021-11-12 | 2022-02-15 | 国网山东省电力公司临朐县供电公司 | Transformer substation inspection system and inspection method thereof |
| CN114092647B (en) * | 2021-11-19 | 2025-02-28 | 复旦大学 | A 3D reconstruction system and method based on panoramic binocular stereo vision |
| CN114035598B (en) * | 2021-11-22 | 2023-11-24 | 青岛理工大学 | Visual swing angle detection and swing reduction method for multi-rotor suspension system |
| CN114187246B (en) * | 2021-11-29 | 2025-01-24 | 哈尔滨工程大学 | A method for measuring focal length of laser marking machine |
| CN114241441B (en) * | 2021-12-03 | 2024-03-29 | 北京工业大学 | Dynamic obstacle detection method based on feature points |
| CN114170246B (en) * | 2021-12-08 | 2024-05-17 | 广东奥普特科技股份有限公司 | Positioning method for precision displacement platform |
| CN114397894B (en) * | 2021-12-29 | 2024-06-14 | 杭州电子科技大学 | A target search method for mobile robots that mimics human memory |
| CN114332935B (en) * | 2021-12-29 | 2024-11-26 | 长春理工大学 | A pedestrian detection method for AGV |
| CN114442615A (en) * | 2021-12-31 | 2022-05-06 | 重庆特斯联智慧科技股份有限公司 | Robot traveling strategy determination method and system based on barrier attributes |
| CN114371710B (en) * | 2022-01-07 | 2024-04-30 | 牧原肉食品有限公司 | Navigation method, device and readable storage medium of mobile robot based on reflective column |
| CN114322279B (en) * | 2022-01-07 | 2023-05-30 | 深圳市丰用实业集团有限公司 | Household air intelligent purification system and method |
| CN114359365B (en) * | 2022-01-11 | 2024-02-20 | 合肥工业大学 | A convergent binocular vision measurement method with high resolution |
| CN114494169B (en) * | 2022-01-18 | 2024-11-08 | 南京邮电大学 | Industrial flexible object detection method based on machine vision |
| CN114661044B (en) * | 2022-01-20 | 2024-08-27 | 河南科技大学 | A control method for household security intelligent car system |
| CN114266326B (en) * | 2022-01-21 | 2022-09-02 | 北京微链道爱科技有限公司 | Object identification method based on robot binocular three-dimensional vision |
| CN114549638B (en) * | 2022-01-24 | 2024-09-20 | 湖北文理学院 | Automatic pipeline centering method, system and test device |
| CN114581617B (en) * | 2022-02-24 | 2025-02-18 | 中航华东光电(上海)有限公司 | Multi-floor map construction method and system by integrating Z-axis data with QR code |
| CN114648575A (en) * | 2022-04-01 | 2022-06-21 | 合肥学院 | A Binocular Vision Detection Method and System for Track Slope Displacement Based on ORB Algorithm |
| CN114724053B (en) * | 2022-04-11 | 2024-02-20 | 合肥工业大学 | An outdoor visually impaired assistance method based on deep intelligent interaction |
| CN114740854A (en) * | 2022-04-11 | 2022-07-12 | 北京京东乾石科技有限公司 | Robot obstacle avoidance control method and device |
| CN114462726B (en) * | 2022-04-14 | 2022-07-12 | 青岛海滨风景区小鱼山管理服务中心 | Intelligent garden management method and system |
| CN114859376A (en) * | 2022-04-21 | 2022-08-05 | 河南省吉立达机器人有限公司 | Mobile robot floor judgment method based on image recognition |
| CN115049949B (en) * | 2022-04-29 | 2024-09-24 | 哈尔滨工程大学 | A method of object expression based on binocular vision |
| CN114897827B (en) * | 2022-05-10 | 2024-03-19 | 河南中烟工业有限责任公司 | Tobacco leaf packaging box status detection method based on machine vision |
| CN114905512B (en) * | 2022-05-16 | 2024-05-14 | 安徽元古纪智能科技有限公司 | Panoramic tracking and obstacle avoidance method and system for intelligent inspection robot |
| CN115235335B (en) * | 2022-05-24 | 2024-12-17 | 武汉工程大学 | Intelligent detection method for running part size of high-speed rail motor train unit |
| CN114862969A (en) * | 2022-05-27 | 2022-08-05 | 国网江苏省电力有限公司电力科学研究院 | Onboard holder camera angle self-adaptive adjusting method and device of intelligent inspection robot |
| CN115963815A (en) * | 2022-05-27 | 2023-04-14 | 江苏博人智能机器人有限公司 | Sanitation robot path tracking control system and method based on SLAM |
| CN115082548B (en) * | 2022-06-15 | 2025-01-24 | 西安科技大学 | Anchor drilling automatic positioning system and method based on binocular vision |
| CN115056225A (en) * | 2022-06-23 | 2022-09-16 | 成都盛锴科技有限公司 | Automatic obstacle avoidance method and device for mechanical arm |
| CN114954863B (en) * | 2022-07-05 | 2024-09-03 | 中国农业大学 | Autonomous patrol early warning bionic robot dolphin system and control method |
| CN115330684B (en) * | 2022-07-13 | 2025-08-15 | 河海大学 | Underwater structure apparent defect detection method based on binocular vision and line structured light |
| CN115291602A (en) * | 2022-07-13 | 2022-11-04 | 杭州海康机器人股份有限公司 | A method, device, electronic device and storage medium for determining the rotation direction of an AGV |
| CN115175105A (en) * | 2022-07-20 | 2022-10-11 | 苏州微创畅行机器人有限公司 | Positioning method and device and automatic positioning device |
| CN115294036A (en) * | 2022-07-21 | 2022-11-04 | 淮阴工学院 | Agaricus bisporus detection method based on shielding condition |
| CN115026683B (en) * | 2022-08-09 | 2022-10-25 | 湖南大学 | Aviation blade grinding and polishing device based on multi-robot cooperation and control method |
| CN115375647A (en) * | 2022-08-18 | 2022-11-22 | 北京航空航天大学 | Intelligent detection system and method for bolt defects of aero-engine based on key point detection |
| CN115096329B (en) * | 2022-08-25 | 2022-11-08 | 燚璞锐科技(江苏)有限公司 | Visual navigation control system and method for engineering road roller |
| CN115309164B (en) * | 2022-08-26 | 2023-06-27 | 苏州大学 | Man-machine co-fusion mobile robot path planning method based on generation of countermeasure network |
| CN115367626B (en) * | 2022-08-30 | 2025-07-25 | 陕煤集团神木柠条塔矿业有限公司 | Autonomous control system and control method of mining pipe grabbing machine |
| CN115546280A (en) * | 2022-09-27 | 2022-12-30 | 大连海事大学 | Multi-camera ship height measurement method based on dynamic long baseline |
| CN115775225A (en) * | 2022-09-27 | 2023-03-10 | 国网江苏省电力有限公司徐州供电分公司 | Method for detecting power transmission line tower settlement using airborne AI intelligent imaging |
| CN115646950B (en) * | 2022-10-28 | 2025-07-11 | 长江慧控科技(武汉)有限公司 | H-shaped steel structure cleaning method, device, equipment and storage medium |
| CN115810053A (en) * | 2022-11-14 | 2023-03-17 | 广东电网有限责任公司 | A Method and System for Determining the Scale Relationship of Binocular Camera Parameters Based on Experiments |
| CN115830530B (en) * | 2022-11-22 | 2025-09-16 | 中国矿业大学 | Crowd real-time evacuation simulation method based on computer vision |
| CN115709331B (en) * | 2022-11-23 | 2024-08-27 | 山东大学 | Welding robot full-autonomous vision guiding method and system based on target detection |
| CN115741702B (en) * | 2022-11-23 | 2024-12-06 | 大连理工大学 | A binocular vision system error modeling method |
| CN115755948A (en) * | 2022-11-29 | 2023-03-07 | 国网安徽省电力有限公司超高压分公司 | Unmanned aerial vehicle path planning method and readable storage medium |
| CN115830118B (en) * | 2022-12-08 | 2024-03-19 | 重庆市信息通信咨询设计院有限公司 | Crack detection method and system for cement electric pole based on binocular camera |
| CN116129037B (en) * | 2022-12-13 | 2023-10-31 | 珠海视熙科技有限公司 | Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof |
| CN116182747A (en) * | 2022-12-30 | 2023-05-30 | 佛山市南海区绿智电机设备有限公司 | A method and system for verticality detection of CNC machine tools based on binocular vision |
| CN116468659A (en) * | 2023-03-07 | 2023-07-21 | 西安建筑科技大学 | Intelligent smoke dust trapping method for magnesium reduction furnace based on binocular perception |
| CN116402886A (en) * | 2023-03-14 | 2023-07-07 | 大连海事大学 | A multi-eye vision three-dimensional measurement method for underwater robots |
| CN116442707B (en) * | 2023-03-15 | 2024-05-10 | 吉林大学 | Vehicle body vertical and pitch motion information estimation system and method based on binocular vision |
| CN116051613A (en) * | 2023-03-17 | 2023-05-02 | 济宁市兖州区自然资源局 | Real estate mapping method based on image analysis |
| CN116423505B (en) * | 2023-03-30 | 2024-04-23 | 杭州邦杰星医疗科技有限公司 | A method for calibrating the error of the robot registration module in robot navigation surgery |
| CN116787428B (en) * | 2023-04-12 | 2025-07-29 | 杭州云象商用机器有限公司 | Mobile robot safety protection method and device |
| CN116452878B (en) * | 2023-04-20 | 2024-02-02 | 广东工业大学 | An attendance method and system based on deep learning algorithm and binocular vision |
| CN116721339B (en) * | 2023-04-24 | 2024-04-30 | 广东电网有限责任公司 | Method, device, equipment and storage medium for detecting power transmission line |
| CN116698027A (en) * | 2023-04-27 | 2023-09-05 | 杭州电力设备制造有限公司 | Mobile map route searching method for power grid equipment inspection |
| CN116405644B (en) * | 2023-05-31 | 2024-01-12 | 湖南开放大学(湖南网络工程职业学院、湖南省干部教育培训网络学院) | A computer network equipment remote control system and method |
| CN116912403B (en) * | 2023-07-03 | 2024-05-10 | 玩出梦想(上海)科技有限公司 | XR device and obstacle information perception method of XR device |
| CN116834955A (en) * | 2023-07-18 | 2023-10-03 | 昆山合朗航空科技有限公司 | Fixed-point throwing method and system of infrared visual throwing device |
| CN116703984B (en) * | 2023-08-07 | 2023-10-10 | 福州和众信拓科技有限公司 | Robot path planning and infrared light image fusion method, system and storage medium |
| CN116721512B (en) * | 2023-08-10 | 2023-10-17 | 泰山学院 | Autonomous navigation robot environment perception control method and system |
| CN117061719B (en) * | 2023-08-11 | 2024-03-08 | 元橡科技(北京)有限公司 | Parallax correction method for vehicle-mounted binocular camera |
| CN116755451B (en) * | 2023-08-16 | 2023-11-07 | 泰山学院 | Intelligent patrol robot path planning method and system |
| CN117169872B (en) * | 2023-08-25 | 2024-03-26 | 广州珠观科技有限公司 | Robot autonomous navigation system based on stereo camera and millimeter wave radar information fusion |
| CN117011385A (en) * | 2023-08-28 | 2023-11-07 | 中国长江三峡集团有限公司 | A bolt posture calculation method, device, equipment and storage medium |
| CN116918593B (en) * | 2023-09-14 | 2023-12-01 | 众芯汉创(江苏)科技有限公司 | Binocular vision unmanned image-based power transmission line channel tree obstacle monitoring system |
| CN117132973B (en) * | 2023-10-27 | 2024-01-30 | 武汉大学 | A method and system for reconstruction and enhanced visualization of extraterrestrial planet surface environment |
| CN117422767B (en) * | 2023-10-30 | 2024-12-13 | 浙江大学 | A robust identification and positioning optimization method for guide lights in AUV docking process |
| CN117491355B (en) * | 2023-11-06 | 2024-07-02 | 广州航海学院 | Visual detection method for abrasion loss of three-dimensional curved surface of rake teeth type large component |
| CN117622262B (en) * | 2023-11-15 | 2024-07-09 | 北京交通大学 | Autonomous train sensing and positioning method and system |
| CN117400256B (en) * | 2023-11-21 | 2024-05-31 | 扬州鹏顺智能制造有限公司 | Industrial robot continuous track control method based on visual images |
| CN117311372B (en) * | 2023-11-30 | 2024-02-09 | 山东省科学院海洋仪器仪表研究所 | Autonomous obstacle avoidance system and method for underwater robots based on binocular stereo vision |
| CN117437563B (en) * | 2023-12-13 | 2024-03-15 | 黑龙江惠达科技股份有限公司 | Plant protection unmanned aerial vehicle dotting method, device and equipment based on binocular vision |
| CN117420276B (en) * | 2023-12-19 | 2024-02-27 | 上海瀚广科技(集团)有限公司 | Laboratory environment detection method and system based on spatial distribution |
| CN117921622B (en) * | 2024-03-25 | 2024-06-04 | 宁波昂霖智能装备有限公司 | Control method of robot for picking up garbage and robot for picking up garbage |
| CN118276585B (en) * | 2024-03-28 | 2024-11-22 | 北京晶品特装科技股份有限公司 | Automatic obstacle avoidance target recognition method for robots |
| WO2025199905A1 (en) * | 2024-03-28 | 2025-10-02 | 深圳市大疆创新科技有限公司 | Path planning method and apparatus, control method and apparatus, control terminal, and storage medium |
| CN118425903B (en) * | 2024-05-17 | 2025-04-25 | 西安交通大学 | Oversized plane/curved surface characteristic measurement method and system based on double unmanned aerial vehicle cooperative light source targets |
| CN118239385B (en) * | 2024-05-23 | 2024-08-02 | 河南省矿山起重机有限公司 | Intelligent steel coil hoisting system and method based on visual identification |
| CN118274845B (en) * | 2024-05-29 | 2024-08-20 | 天津地铁智慧科技有限公司 | Subway station robot inspection system and inspection method |
| CN118438450A (en) * | 2024-05-31 | 2024-08-06 | 深圳广川嵘兴信息科技有限公司 | Industrial robot automation control system |
| CN118674771A (en) * | 2024-06-24 | 2024-09-20 | 河海大学 | Autonomous inspection method for power equipment guided by double-view stereoscopic vision |
| CN118533182B (en) * | 2024-07-25 | 2024-09-17 | 山东鸿泽自动化技术有限公司 | Visual intelligent navigation method and system for transfer robot |
| CN118870297A (en) * | 2024-08-26 | 2024-10-29 | 天津大学合肥创新发展研究院 | Vision-based automatic positioning method and system for WiFi devices |
| CN118720571B (en) * | 2024-08-30 | 2025-04-08 | 河北工业大学 | Intelligent welding system and method based on three-dimensional space visual recognition and positioning |
| CN119088030A (en) * | 2024-09-05 | 2024-12-06 | 北京中联国成科技有限公司 | A path planning method and system for humanoid robot obstacle avoidance |
| CN119399432B (en) * | 2024-10-16 | 2025-06-20 | 广西电网能源科技有限责任公司 | Equipment operation data management method and system for laser obstacle removal |
| CN119218881B (en) * | 2024-12-02 | 2025-02-25 | 凯德技术长沙股份有限公司 | Overhead crane control method and control system based on machine vision |
| CN119690119A (en) * | 2024-12-16 | 2025-03-25 | 浙江嘉创空天动力技术有限公司 | Unmanned aerial vehicle vision positioning and obstacle avoidance method and system |
| CN119314229B (en) * | 2024-12-17 | 2025-03-11 | 国能榆林能源有限责任公司 | Method, system and medium for detecting unsafe behaviors of tunneling working face personnel |
| CN119349422A (en) * | 2024-12-27 | 2025-01-24 | 杭州宇泛智能科技股份有限公司 | Hook trajectory prediction method and device based on IoT perception |
| CN119764975B (en) * | 2025-03-06 | 2025-07-01 | 深圳市大寰机器人科技有限公司 | BTB terminal buckling method and tool |
| CN120800340B (en) * | 2025-05-28 | 2026-01-02 | 北京慧享方略科技有限公司 | Intelligent robot inspection method and system based on SLAM navigation and multi-sensor fusion |
| CN120245004B (en) * | 2025-05-29 | 2025-10-28 | 江苏三铭智达科技有限公司 | Rescue robot autonomous identification positioning system and method based on industrial vision |
| CN120800343B (en) * | 2025-06-20 | 2026-02-06 | 青岛蓝海软通信息技术有限公司 | Human-shaped robot navigation method, system, storage medium and program product |
| CN120510598B (en) * | 2025-07-21 | 2025-09-19 | 时代天海科技有限公司 | Target tracking and identifying system based on marine automatic charging robot |
| CN120580671B (en) * | 2025-08-05 | 2025-09-26 | 广东海洋大学 | Intelligent agricultural robot pattern recognition method and obstacle avoidance system |
| CN120697047B (en) * | 2025-08-28 | 2025-11-04 | 中铁十四局集团有限公司 | Modularized mobile control system of tool changing robot |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5377106A (en) * | 1987-03-24 | 1994-12-27 | Fraunhofer Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Process for navigating an unmanned vehicle and a vehicle for the same |
| JP3994950B2 (en) * | 2003-09-19 | 2007-10-24 | ソニー株式会社 | Environment recognition apparatus and method, path planning apparatus and method, and robot apparatus |
| CN101852609B (en) * | 2010-06-02 | 2011-10-19 | 北京理工大学 | Ground obstacle detection method based on binocular stereo vision of robot |
| CN103413313B (en) * | 2013-08-19 | 2016-08-10 | 国家电网公司 | The binocular vision navigation system of electrically-based robot and method |
| CN103400392B (en) * | 2013-08-19 | 2016-06-22 | 山东鲁能智能技术有限公司 | Binocular vision navigation system and method based on Intelligent Mobile Robot |
- 2014
  - 2014-06-16 WO PCT/CN2014/079912 patent/WO2015024407A1/en not_active Ceased
  - 2014-06-16 CA CA2950791A patent/CA2950791C/en active Active
Also Published As
| Publication number | Publication date |
|---|---|
| WO2015024407A1 (en) | 2015-02-26 |
| CA2950791A1 (en) | 2015-02-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CA2950791C (en) | | Binocular visual navigation system and method based on power robot |
| CN103400392B (en) | Binocular vision navigation system and method based on Intelligent Mobile Robot | |
| CN103413313B (en) | The binocular vision navigation system of electrically-based robot and method | |
| EP3852064B1 (en) | Object labeling method and apparatus, movement control method and apparatus, device, and storage medium | |
| CN112734765B (en) | Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors | |
| Zhou et al. | Self‐supervised learning to visually detect terrain surfaces for autonomous robots operating in forested terrain | |
| CN110097553A (en) | The semanteme for building figure and three-dimensional semantic segmentation based on instant positioning builds drawing system | |
| CN110246175A (en) | Intelligent Mobile Robot image detecting system and method for the panorama camera in conjunction with holder camera | |
| US12366660B2 (en) | System and method for detecting road intersection on point cloud height map | |
| CN113112491A (en) | Cliff detection method and device, robot and storage medium | |
| CN112232139A (en) | Obstacle avoidance method based on combination of Yolo v4 and Tof algorithm | |
| Li et al. | Judgment and optimization of video image recognition in obstacle detection in intelligent vehicle | |
| Wang et al. | Target detection for construction machinery based on deep learning and multisource data fusion | |
| Manivannan et al. | Vision based intelligent vehicle steering control using single camera for automated highway system | |
| Krawciw et al. | LaserSAM: Zero-shot change detection using visual segmentation of spinning LiDAR | |
| CN113298044B (en) | Obstacle detection method, system, device and storage medium based on positioning compensation | |
| Yang et al. | A novel navigation assistant method for substation inspection robot based on multisensory information fusion | |
| Kim et al. | Traffic Accident Detection Based on Ego Motion and Object Tracking | |
| Liu et al. | Research on security of key algorithms in intelligent driving system | |
| Lu et al. | Research on unmanned surface vessel perception algorithm based on multi-sensor fusion | |
| Kamalasanan et al. | Improving 3d pedestrian detection for wearable sensor data with 2d human pose | |
| Hou et al. | Research on GDR obstacle detection method based on stereo vision | |
| CN118366117A (en) | Data processing method and device | |
| Nayak et al. | BEV detection and localisation using semantic segmentation in autonomous car driving systems | |
| Aderoba et al. | Enhanced Lane Detection for Autonomous Campus Shuttles Using Hybrid Computer Vision Techniques |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | EEER | Examination request | Effective date: 20161130 |