CN114488103A - Ranging system, ranging method, robot, equipment and storage medium - Google Patents
- Publication number
- CN114488103A (application CN202111682866.1A)
- Authority
- CN
- China
- Prior art keywords
- light
- target object
- ranging
- texture
- robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S11/00—Systems for determining distance or velocity not using reflection or reradiation
- G01S11/12—Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
Landscapes
- Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Measurement Of Optical Distance (AREA)
Abstract
The present invention provides a ranging system, a ranging method, a robot, a device, and a storage medium. The ranging system includes a stereo vision device, a projection device, and a processing device. The stereo vision device includes at least two cameras; the projection device includes a light emitter, a pattern mask, and a lens group. The light emitter emits light; the light passes through the pattern mask to form a patterned beam; the lens group projects the patterned beam onto a target object, forming a texture on the target object. The cameras capture images of the target object to obtain texture images, and the processing device determines the depth of the target object from the texture images captured by the at least two cameras. The invention reduces the manufacturing cost of existing ranging devices.
Description
Technical Field
The present invention relates to the field of computer vision, and in particular to a ranging system, a ranging method, a robot, a device, and a storage medium.
Background Art
Binocular stereo vision is an important form of computer vision. Based on the parallax principle, it uses imaging devices to capture two images of a measured object from different positions and recovers the object's three-dimensional geometry by computing the positional offset between corresponding image points. One precondition for existing binocular stereo vision techniques to compute object distance successfully is that the measured object must have a certain degree of texture; otherwise the object cannot be detected by the algorithm at its imaging positions in the two cameras, and matching and the subsequent computation cannot proceed.
To address the problem that textureless objects cannot be matched by binocular vision algorithms, the prior art mainly combines a VCSEL (Vertical-Cavity Surface-Emitting Laser) with a DOE (Diffractive Optical Element). The laser emitted by the VCSEL is collimated into parallel light, and the DOE then projects a dense, regularly structured speckle pattern, covering the object surface with a texture dense enough for the algorithm. However, in this VCSEL+DOE speckle-projection scheme, the DOE is fabricated with a very-large-scale-integration (VLSI) manufacturing process. Moreover, to sharpen the texture and exclude ambient light, an infrared VCSEL is generally used and visible light is filtered out on the camera-lens side, so the distance observable by the stereo vision algorithm is relatively short, and information such as the object's original color and intrinsic texture is lost.
It follows that the prior art suffers from high cost and a large loss of visual information.
Summary of the Invention
The main purpose of the present invention is to provide a ranging system, a ranging method, a robot, a device, and a storage medium, aiming to solve the problems of high cost and large loss of visual information in existing ranging technology.
To achieve the above purpose, the present invention provides a ranging system, including a stereo vision device, a projection device, and a processing device; the stereo vision device includes at least two cameras; the projection device includes a light emitter, a pattern mask, and a lens group; wherein,
the light emitter is used to emit light; the light passes through the pattern mask to form a patterned beam; the lens group is used to project the patterned beam onto a target object, forming a texture on the target object;
the cameras are used to capture images of the target object to obtain texture images;
the processing device is used to determine the depth of the target object according to the texture images captured by the at least two cameras.
Optionally, the pattern mask includes a light-shielding area and a light-transmitting area; the light-transmitting area and the light-shielding area form a pattern, and the light passes through the light-transmitting portion to form a patterned beam.
Optionally, the light-transmitting portion is a piece of transparent material; or the light-transmitting portion is provided with a plurality of light-transmitting holes; the light passes through the pattern mask to form a patterned beam.
Optionally, the projection device further includes a light collimator arranged between the light emitter and the pattern mask, which is used to collimate the light emitted by the light emitter into parallel rays.
In addition, to achieve the above purpose, the present invention also provides a ranging method applied to a ranging system; the ranging system includes a stereo vision device and a projection device; the stereo vision device includes at least two cameras; the projection device includes a light emitter, a pattern mask, and a lens group; the ranging method includes the steps of:
emitting light through the light emitter, the light passing through the pattern mask to form a patterned beam;
projecting the patterned beam onto a target object through the lens group, forming a texture on the target object;
capturing images of the target object through the cameras to obtain texture images;
determining the depth of the target object according to the texture images captured by the at least two cameras.
Optionally, the step of determining the depth of the target object according to the texture images captured by the at least two cameras includes:
acquiring calibration parameters of the at least two cameras;
separately extracting feature information at different scales from the texture images captured by the at least two cameras;
computing, according to the feature information at the different scales, the matching cost between the texture images captured by the at least two cameras;
performing cost aggregation on the computed matching cost, and obtaining a target disparity value from the aggregated result;
calculating the depth of the target object according to the calibration parameters and the target disparity value.
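The sequence above — features, matching cost, cost aggregation, disparity selection, depth — is a standard local stereo pipeline. The cost-aggregation and disparity-selection steps can be sketched as follows; this is a minimal stand-in, not the patent's specific scheme, and the function names and the box-average choice are assumptions for illustration:

```python
import numpy as np

def aggregate_cost(cost, radius=1):
    """Average each disparity slice of a cost volume over a
    (2*radius+1)^2 neighbourhood -- a minimal cost-aggregation step.
    `cost` has shape (num_disparities, height, width)."""
    d, h, w = cost.shape
    p = radius
    padded = np.pad(cost, ((0, 0), (p, p), (p, p)), mode="edge")
    agg = np.zeros_like(cost)
    for dy in range(2 * p + 1):
        for dx in range(2 * p + 1):
            agg += padded[:, dy:dy + h, dx:dx + w]
    return agg / (2 * p + 1) ** 2

def target_disparity(cost):
    """Winner-take-all: pick the lowest-cost disparity at every pixel."""
    return np.argmin(cost, axis=0)
```

Aggregating before the winner-take-all step suppresses isolated mismatches at individual pixels, which is why the method aggregates the matching cost before extracting the target disparity value.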
Optionally, the step of computing the matching cost between the texture images captured by the at least two cameras according to the feature information at the different scales includes:
traversing the texture images captured by the at least two cameras pixel by pixel within a preset disparity search range, according to the feature information at the different scales, and calculating the matching cost of each pixel at each candidate disparity.
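The pixel-by-pixel traversal over a preset disparity search range can be sketched with a plain absolute-difference cost on rectified grayscale images. The patent's multi-scale feature cost is richer than this, so the following is only an illustrative stand-in with invented names:

```python
import numpy as np

def matching_cost_volume(left, right, max_disp):
    """For every pixel (y, x) of the left image and every candidate
    disparity d in [0, max_disp), store the cost of matching it against
    right-image pixel (y, x - d). Out-of-range positions get a large cost."""
    h, w = left.shape
    big = 1e9  # large finite cost for disparities that leave the image
    cost = np.full((max_disp, h, w), big)
    for d in range(max_disp):
        # Absolute-difference cost between left(y, x) and right(y, x - d).
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    return cost
```

When the right image is a pure horizontal shift of the left image, the cost is exactly zero at the true disparity, so a winner-take-all over the first axis recovers it.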
Optionally, the step of calculating the depth of the target object according to the calibration parameters and the target disparity value includes:
calculating the object depth value of each pixel according to the relative distance between the at least two cameras, the camera focal length, and the target disparity value;
calculating the depth of the target object according to the object depth value of each pixel.
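The per-pixel depth in these two steps follows the standard stereo triangulation relation Z = f·B/d, where B is the baseline (the relative distance between the two cameras), f the focal length in pixels, and d the disparity. A minimal sketch — the function names and the median reduction over pixels are illustrative assumptions, not the patent's stated method:

```python
import numpy as np

def depth_map(disparity_px, baseline_m, focal_px):
    """Per-pixel object depth from the stereo relation Z = f * B / d.
    Pixels with zero disparity are mapped to infinity (no valid match)."""
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, focal_px * baseline_m / np.maximum(d, 1e-12), np.inf)

def object_depth(depth):
    """Collapse a per-pixel depth map to one value for the target object,
    here simply the median over the valid (finite) pixels."""
    valid = depth[np.isfinite(depth)]
    return float(np.median(valid))
```

For example, with an assumed 6 cm baseline and an 800-pixel focal length, a disparity of 4 pixels corresponds to a depth of 12 m, and halving the depth doubles the disparity.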
In addition, the present invention also provides a robot including the above ranging system.
In addition, the present invention provides a ranging device, including a robot, a memory, a processor, and a ranging program stored in the memory and executable on the processor; when executed by the processor, the ranging program implements the steps of the above ranging method.
In addition, the present invention also provides a storage medium storing a ranging program; when executed by a processor, the ranging program implements the steps of the above ranging method.
The present invention provides a ranging system, a ranging method, a robot, a device, and a storage medium. The ranging system includes a stereo vision device, a projection device, and a processing device; the stereo vision device includes at least two cameras; the projection device includes a light emitter, a pattern mask, and a lens group. The light emitter emits light; the light passes through the pattern mask to form a patterned beam; the lens group projects the patterned beam onto a target object, forming a texture on it; the cameras capture images of the target object to obtain texture images; and the processing device determines the depth of the target object from the texture images captured by the at least two cameras. The ranging method includes the steps of: emitting light through the light emitter, the light passing through the pattern mask to form a patterned beam; projecting the patterned beam onto a target object through the lens group, forming a texture on the target object; capturing images of the target object through the cameras to obtain texture images; and determining the depth of the target object according to the texture images captured by the at least two cameras. With the above structure and method, the present invention can measure the distance to textureless objects: by projecting a coarse texture onto a textureless object, the accurate distance between the robot and the object can be measured. This removes the drawback of existing ranging methods that require a precise projection device, avoids the large loss of visual information in existing projection devices, and, at the same time, the ranging system of the present application reduces the manufacturing cost of the robot.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of the device in the hardware operating environment involved in an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of the ranging system of the present invention;
FIG. 3 is a schematic structural diagram of the projection device in the ranging system of the present invention;
FIG. 4 is a schematic flowchart of the first embodiment of the ranging method of the present invention;
FIG. 5 is a detailed flowchart of step S40 in the first embodiment of the ranging method of the present invention;
FIG. 6 is a detailed flowchart of step S45 in the first embodiment of the ranging method of the present invention.
The realization of the purpose, functional characteristics, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description of the Embodiments
To make the purpose, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.
The robots involved in this application may include cleaning robots, housekeeping robots, service robots, logistics robots, and so on. Specifically, cleaning robots may include sweeping robots, mopping robots, integrated sweeping-and-mopping robots, and the like. A cleaning robot can automatically clean the floor, with application scenarios such as household indoor cleaning and cleaning of large venues. The following description takes a cleaning robot as an example:
A cleaning robot is provided with a cleaning member and a driving device, and the driving device may include a motor and driving wheels. Driven by the driving device, the cleaning robot moves along a preset cleaning path and cleans the floor with the cleaning member. For a sweeping robot, the cleaning member is a sweeping member, and the robot is provided with a dust-suction device; during cleaning, the sweeping member sweeps dust, debris, and the like to the suction port of the dust-suction device, which draws them in for temporary storage. For a mopping robot, the cleaning member is a mopping member (for example, a mop) that contacts the floor; as the robot moves, the mopping member wipes the floor to clean it. For an integrated sweeping-and-mopping robot, the cleaning members include both a sweeping member and a mopping member, which can work simultaneously (mopping and sweeping together) or separately. The sweeping member may further include a side brush and a rolling brush (also called a middle brush): the side brush sweeps dust and other debris from the outside toward the middle area, and the rolling brush then sweeps the debris into the dust-suction device.
For the user's convenience, a cleaning robot is often used together with a base station. The base station can charge the robot: when the robot's battery level falls below a threshold during cleaning, the robot automatically moves to the base station to charge. The base station can also clean the mopping member (such as a mop): after wiping the floor, the mopping member often becomes dirty and needs washing. To this end, the robot can move onto the base station, where a cleaning mechanism automatically washes the mopping member. Besides these functions, the base station can also manage the robot, controlling it more intelligently while it performs cleaning tasks and improving the intelligence of its operation.
As shown in FIG. 1, FIG. 1 is a schematic structural diagram of a robot in the hardware operating environment involved in an embodiment of the present invention.
The main solution of the embodiments of the present invention is: emitting light through the light emitter, the light passing through the pattern mask to form a patterned beam; projecting the patterned beam onto a target object through the lens group, forming a texture on the target object; capturing images of the target object through the cameras to obtain texture images; and determining the depth of the target object according to the texture images captured by the at least two cameras.
In the prior art, a cleaning robot often encounters obstacles (such as trash cans and walls) while moving. When passing an obstacle, the robot often fails to recognize it and collides with it, which affects the robot's normal operation.
The present invention provides the above solution, aiming to obtain information about obstacles, avoid collisions between the cleaning robot and obstacles, and ensure the robot's normal operation.
The present invention proposes a cleaning robot, which may be an automated device for environmental cleaning such as a sweeping robot or a mopping robot. In other embodiments, the robot may also be another type of robot, such as a service robot.
As shown in FIG. 1, the robot may include: a processor 1001 (such as a CPU), a communication bus 1002, a user interface 1003, a network interface 1004, a memory 1005, and a sensing unit 1006. The communication bus 1002 is used to implement connection and communication among these components. The user interface 1003 may include a display and an input unit such as a keyboard, and optionally may also include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (such as a Wi-Fi interface).
The memory 1005 is arranged on the robot body and stores a program that, when executed by the processor 1001, implements the corresponding operations. The memory 1005 is also used to store parameters for the robot. The memory 1005 may be a high-speed RAM memory or a non-volatile memory such as a magnetic disk memory; optionally, it may also be a storage device independent of the aforementioned processor 1001.
The robot can communicate with a user terminal through the network interface 1004, and can also communicate with a base station through short-range communication technology, where the base station is a cleaning device used in conjunction with the robot.
The sensing unit 1006 includes various types of sensors, such as a lidar, a collision sensor, a distance sensor, a drop sensor, a counter, and a gyroscope.
The lidar is arranged on top of the robot body. In operation, the lidar rotates and emits laser signals through its transmitter; the signals are reflected by obstacles, and the lidar's receiver receives the reflected signals. By analyzing the received laser signals, the lidar's circuit unit obtains information about the surrounding environment, such as the distance and angle of obstacles relative to the lidar. A camera may also be used instead of the lidar: by analyzing obstacles in the images the camera captures, the distance and angle of the obstacles relative to the camera can likewise be obtained.
The collision sensor includes a collision housing and a trigger sensor. The collision housing surrounds the head of the robot body; specifically, it may be arranged at the head and at forward positions on the left and right sides of the robot body. The trigger sensor is arranged inside the robot body, behind the collision housing, and an elastic buffer is provided between the collision housing and the robot body. When the robot collides with an obstacle through the collision housing, the housing moves toward the inside of the robot and compresses the elastic buffer. After the housing has moved inward a certain distance, it contacts the trigger sensor, which is triggered and generates a signal that can be sent to the robot controller inside the robot body for processing. After the collision, the robot moves away from the obstacle, and under the action of the elastic buffer the collision housing returns to its original position. The collision sensor can thus detect obstacles and cushion the impact when a collision occurs.
The distance sensor may specifically be an infrared detection sensor used to detect the distance from an obstacle to the sensor. The distance sensor may be arranged on the side of the robot body, so that the distance from an obstacle near the robot's side to the sensor can be measured. The distance sensor may also be an ultrasonic ranging sensor, a laser ranging sensor, a depth sensor, or the like, which is not limited here.
One or more drop sensors may be arranged on the bottom edge of the robot body. When the robot moves to the edge of the floor, a drop sensor can detect the risk of the robot falling from a height, so that a corresponding anti-fall response is executed, for example stopping or moving away from the drop position.
A counter and a gyroscope are also arranged inside the robot body. The counter accumulates the total rotation angle of the driving wheels to calculate the distance the driving wheels have moved the robot. The gyroscope detects the angle through which the robot has turned, so that the robot's heading can be determined.
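As a rough illustration of how the counter's accumulated wheel rotation can be turned into travelled distance — the encoder-tick model and all names below are invented for the example; the patent does not specify them:

```python
import math

def travelled_distance_m(total_ticks, ticks_per_revolution, wheel_radius_m):
    """Distance driven = wheel circumference x number of revolutions,
    with revolutions recovered from the counter's accumulated ticks."""
    revolutions = total_ticks / ticks_per_revolution
    return 2.0 * math.pi * wheel_radius_m * revolutions
```

For instance, an assumed encoder with 500 ticks per revolution on a 3 cm wheel reports roughly 0.38 m of travel after 1000 ticks (two full revolutions).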
Those skilled in the art will understand that the robot structure shown in FIG. 1 does not limit the robot; the robot may include more or fewer components than shown, combine certain components, or arrange the components differently.
As shown in FIG. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and the robot's ranging program.
In the robot shown in FIG. 1, the network interface 1004 is mainly used to connect to, and exchange data with, a base station, charging dock, or other equipment used with the robot, where the base station can charge the robot and clean its mop; the user interface 1003 is mainly used to connect to and exchange data with a client; and the processor 1001 can be used to call the robot's ranging program stored in the memory 1005 and perform the following operations:
The ranging method includes the steps of:
emitting light through the light emitter, the light passing through the pattern mask to form a patterned beam;
projecting the patterned beam onto a target object through the lens group, forming a texture on the target object;
capturing images of the target object through the cameras to obtain texture images;
determining the depth of the target object according to the texture images captured by the at least two cameras.
Further, the processor 1001 may call the ranging program stored in the memory 1005 and also perform the following operations:
The step of determining the depth of the target object according to the texture images captured by the at least two cameras includes:
acquiring calibration parameters of the at least two cameras;
separately extracting feature information at different scales from the texture images captured by the at least two cameras;
computing, according to the feature information at the different scales, the matching cost between the texture images captured by the at least two cameras;
performing cost aggregation on the computed matching cost, and obtaining a target disparity value from the aggregated result;
calculating the depth of the target object according to the calibration parameters and the target disparity value.
Further, the processor 1001 may call the ranging program stored in the memory 1005 and also perform the following operations:
The step of computing the matching cost between the texture images captured by the at least two cameras according to the feature information at the different scales includes:
traversing the texture images captured by the at least two cameras pixel by pixel within a preset disparity search range, according to the feature information at the different scales, and calculating the matching cost of each pixel at each candidate disparity.
Further, the processor 1001 may call the ranging program stored in the memory 1005 and also perform the following operations:
The step of calculating the depth of the target object according to the calibration parameters and the target disparity value includes:
calculating the object depth value of each pixel according to the relative distance between the at least two cameras, the camera focal length, and the target disparity value;
calculating the depth of the target object according to the object depth value of each pixel.
Based on the above hardware structure, various embodiments of the ranging system of the present invention are proposed.
Please refer to FIG. 2 and FIG. 3. FIG. 2 is a schematic structural diagram of the ranging system of the present invention, and FIG. 3 is a schematic structural diagram of the projection device in the ranging system. The ranging system includes a stereo vision device 01, a projection device 02, and a processing device 03; the stereo vision device 01 includes at least two cameras; the projection device 02 includes a light emitter 021, a pattern mask 022, and a lens group 023. The light emitter 021 is used to emit light; the light passes through the pattern mask 022 to form a patterned beam; the lens group 023 is used to project the patterned beam onto a target object, forming a texture on the target object; the cameras are used to capture images of the target object to obtain texture images; and the processing device 03 is used to determine the depth of the target object according to the texture images captured by the at least two cameras.
In this embodiment, the projection device 02 can be applied to a robot, or installed in a scene that requires projection-based ranging, for example on a textureless white wall or textureless furniture; the installation is flexible and suits various occasions where ranging is needed. It should be noted that the stereo vision device 01 and the projection device 02 must be arranged so that the projection area of the projection device 02 partially or fully overlaps the visual coverage area of the stereo vision device, i.e. of the cameras. Specifically, the stereo vision device 01 and the projection device 02 may be mounted on the same plane, for example with the projection device 02 in the middle and the stereo vision device 01 on both sides (i.e., the two cameras on either side of the projection device 02); or with the projection device 02 at the side and the stereo vision device 01 in the middle; or with the two cameras of the stereo vision device 01 and the projection device 02 in a triangular arrangement; or with multiple cameras of the stereo vision device 01 and the projection device 02 in a five-unit arrangement, and so on — this application imposes no limitation here. Moreover, the stereo vision device 01 and the projection device 02 need not be arranged on the same plane, as long as the projection area of the projection device 02 partially or fully overlaps the visual coverage area of the cameras. The light emitter 021 may be an LED or a VCSEL for generating light. The lens group 023 is used to project the patterned beam onto the target object at a preset angle. In this embodiment, the above structure can replace the costly existing VCSEL+DOE scheme and avoids its large loss of visual information; moreover, this scheme has a long ranging distance, which facilitates combining the ranging algorithm with further vision applications such as semantic segmentation and object detection.
In one embodiment, the pattern mask 022 includes a light-shielding area and a light-transmitting area; the light-transmitting area and the light-shielding area form a pattern, and the light passes through the light-transmitting portion to form a patterned beam.
In one embodiment, the light-transmitting portion is a piece of transparent material; or the light-transmitting portion is provided with a plurality of light-transmitting holes; the light passes through the pattern mask 022 to form a patterned beam.
In this embodiment, a pre-designed pattern can be applied to a transparent medium, such as glass, by laser printing or etching. The laser-printed or etched portion then forms the light-shielding area, while the unprinted portion forms the light-transmitting area: when light passes through the pattern mask 022, the printed or etched areas block it, so a patterned beam is formed through the light-transmitting area. Alternatively, a hollowing-out method may be used to form a patterned cut-out in the pattern mask 022; when light passes through the cut-out portion, a patterned beam is likewise formed. Forming the patterned beam in these ways keeps the manufacturing cost low, and the resulting pattern is sharp, satisfying texture projection in different situations and making the ranging more accurate.
In one embodiment, the projection device 02 further includes a light collimator arranged between the light emitter 021 and the pattern mask 022, which collimates the light emitted by the light emitter 021 into parallel rays. In this embodiment, the light collimator makes the pattern projected by the projection device 02 more accurate and sharper.
Please refer to FIG. 4, which is a schematic flowchart of a first embodiment of the ranging method of the present invention. The ranging method is applied to a ranging system that includes a stereo vision device and a projection device; the stereo vision device includes at least two camera devices, and the projection device includes a light emitter, a pattern mask, and a lens group. The ranging method of this embodiment includes the following steps:

Step S10: emit light through the light emitter; the light passes through the pattern mask to form a patterned beam.

Step S20: project the patterned beam onto the target object through the lens group, forming a texture on the target object.

In this embodiment, the lens group projects the patterned beam onto the target object at a preset angle; the target object is the object whose distance is to be measured.

Step S30: capture images of the target object with the camera devices to obtain texture images.

In this embodiment, there are at least two camera devices; of course, more camera devices may be provided for image capture, and the present invention is not limited in this respect. For convenience of description, two camera devices are used for image capture below.

Step S40: determine the depth of the target object from the texture images captured by the at least two camera devices.
In one embodiment, referring to FIG. 5, step S40 further includes:

Step S41: obtain the calibration parameters of the at least two camera devices.

Step S42: extract the feature information of the texture images captured by the at least two camera devices at different scales.

In this embodiment, the calibration parameters include each camera's focal length, optical center, and distortion coefficients, as well as the relative position between the two or more camera devices. The different scales range from a macroscopic large scale to a microscopic small scale. Concretely, a minimum and a maximum scale can be set, the interval between them divided equally, and the feature information of the texture image extracted at each resulting scale; alternatively, an image pyramid can be used, downsampling the texture image stage by stage to obtain texture images at different resolutions, i.e., different scales, where a higher pyramid level means a smaller image scale and a lower resolution. The feature information can be extracted with a convolutional neural network. In addition, after the two texture images have been captured, they need to be stereo-rectified: the two images of the same scene points, which as actually captured do not lie in a common plane, are warped into coplanar, row-aligned images, so that the search for matching pixels reduces from a two-dimensional to a one-dimensional search, improving matching efficiency.
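As one way to picture the image-pyramid step, the sketch below builds a simple 2x2 average-pooling pyramid in NumPy. This is a minimal stand-in for the Gaussian pyramid typically used in practice (e.g. OpenCV's `cv2.pyrDown`), not the patent's prescribed method: each level halves the resolution, so higher levels correspond to smaller scales.

```python
import numpy as np

def build_pyramid(img, levels=3):
    """Downsample by 2x2 average pooling; level 0 is the full-resolution image,
    and each further level halves both dimensions (smaller scale, lower res)."""
    pyramid = [img.astype(np.float32)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        # crop to even dimensions so the 2x2 blocks tile exactly
        h, w = (prev.shape[0] // 2) * 2, (prev.shape[1] // 2) * 2
        down = prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(down)
    return pyramid

pyr = build_pyramid(np.zeros((64, 48), dtype=np.float32))
print([p.shape for p in pyr])  # [(64, 48), (32, 24), (16, 12)]
```

Feature extraction (the CNN mentioned above) would then be run on each level, or the levels can feed a coarse-to-fine disparity search.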
Step S43: based on the feature information at the different scales, compute the matching costs between the texture images captured by the at least two camera devices.

In this embodiment, cost computation is the process of finding, for the same texture, its corresponding positions in the different texture images.

In one embodiment, step S43 further includes:

Step A431: based on the feature information at the different scales, traverse the texture images captured by the at least two camera devices pixel by pixel over a preset disparity search range, computing the matching cost of each pixel at each disparity.

In this embodiment, the disparity is the distance, in pixels, between the pixel coordinates of two matching pixels in the two texture images. The purpose of computing the matching cost is to measure the correlation between the pixel to be matched and a candidate pixel. Whether or not two pixels are corresponding points, their matching cost can be computed with a matching cost function: the smaller the cost, the greater the correlation, and the higher the probability that they are corresponding points, i.e., projections of the same scene point.
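One simple instance of such a matching cost function is a window-based sum of absolute differences (SAD); census transforms or the learned features mentioned above would be drop-in alternatives. The sketch below is illustrative, not the patent's specific cost: it fills a cost volume `cost[d, y, x]` by traversing every pixel over a preset disparity search range, assuming rectified single-channel images.

```python
import numpy as np

def sad_cost_volume(left, right, max_disp=16, window=5):
    """Matching cost cost[d, y, x] between left[y, x] and right[y, x - d].

    Smaller cost means higher correlation, i.e. a more likely correspondence.
    Columns where the shifted right image has no data are set to +inf.
    """
    h, w = left.shape
    pad = window // 2
    cost = np.empty((max_disp, h, w), dtype=np.float32)
    for d in range(max_disp):
        diff = np.full((h, w), np.inf, dtype=np.float32)
        diff[:, d:] = np.abs(left[:, d:] - right[:, :w - d])
        # box-sum the absolute differences to get a windowed SAD
        padded = np.pad(diff, pad, mode="edge")
        win = np.lib.stride_tricks.sliding_window_view(padded, (window, window))
        cost[d] = win.sum(axis=(2, 3))
    return cost
```

A winner-takes-all disparity would be `cost.argmin(axis=0)`, although in this method the aggregation of step S44 is applied to the volume first.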
Step S44: perform cost aggregation on the computed matching costs and obtain the target disparity value from the aggregated result.

In this embodiment, after the costs of all pixels have been computed independently, the cost distribution over the whole image is further optimized as a whole, which more effectively avoids the failure to find correct image correspondences caused by sparse texture. For example, the i-th pixel in the first texture image may have the same or nearly the same cost with respect to two or more different pixels in the second texture image, and it must then be determined which of those pixels is the correct match. The costs of the surrounding pixels can be used for this: concretely, the multi-scale costs can be aggregated globally or semi-globally to obtain optimized costs.
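A minimal sketch of what "semi-global" aggregation means, assuming a cost volume `cost[d, y, x]` as produced in step A431: the classic SGM recurrence is run along one scan-line direction (left to right), adding small penalties P1/P2 for disparity changes so that neighbouring pixels' costs disambiguate pixels whose own costs are tied. A full implementation would sum several path directions; this shows a single path and is not the patent's specific aggregation.

```python
import numpy as np

def aggregate_left_to_right(cost, p1=0.1, p2=1.0):
    """One SGM path:
       L(d, x) = C(d, x)
                 + min(L(d, x-1), L(d±1, x-1) + P1, min_d' L(d', x-1) + P2)
                 - min_d' L(d', x-1)
    """
    D, H, W = cost.shape
    agg = cost.astype(np.float64)
    for x in range(1, W):
        prev = agg[:, :, x - 1]                        # (D, H)
        min_prev = prev.min(axis=0)                    # best previous cost per row
        up = np.vstack([prev[1:], np.full((1, H), np.inf)])     # L(d+1, x-1)
        down = np.vstack([np.full((1, H), np.inf), prev[:-1]])  # L(d-1, x-1)
        smooth = np.minimum.reduce([prev, up + p1, down + p1,
                                    np.broadcast_to(min_prev + p2, (D, H))])
        agg[:, :, x] = cost[:, :, x] + smooth - min_prev
    return agg
```

The target disparity of step S44 is then the winner-takes-all `agg.argmin(axis=0)` over the aggregated (here single-path) costs.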
Step S45: compute the depth of the target object from the calibration parameters and the target disparity value.

In one embodiment, referring to FIG. 6, step S45 further includes:

Step S451: compute the object depth value for each pixel from the relative distance between the at least two camera devices, the camera focal length, and the target disparity value.

The object depth value corresponding to each pixel is the distance between that pixel of the texture image and the target object. Concretely, the object depth value of each pixel is computed by the following formula:

z = Fb/d

where z is the depth distance, F is the camera focal length, b is the relative distance between the cameras that captured the texture images, and d is the target disparity value between the pixels of the same object point in the different cameras' texture images.

Step S452: compute the depth of the target object from the object depth value of each pixel.

In this embodiment, the overall depth value of the target object, i.e., the distance from the robot to the target object, can be computed directly from the per-pixel object depth values. The minimum depth value over all pixels can be taken as the depth of the target object, or the depth of the target object can be further selected based on the overall depth of the object.
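The two steps above amount to applying z = Fb/d per pixel and then reducing the resulting depth map. The sketch below uses illustrative numbers (the focal length in pixels and the baseline in metres are assumptions, not values from the patent) and takes the minimum finite depth as the robot-to-object distance, one of the options the text mentions.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m, min_disp=1e-6):
    """Per-pixel depth z = F * b / d; (near-)zero disparity means the point
    is effectively at infinity, so those pixels are masked to +inf."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > min_disp
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

def object_distance(depth_map):
    """One choice from the text: the minimum finite depth over all pixels,
    a conservative pick for obstacle avoidance."""
    finite = depth_map[np.isfinite(depth_map)]
    return float(finite.min()) if finite.size else float("inf")

d = np.array([[4.0, 2.0], [0.0, 8.0]])    # disparities in pixels
z = depth_from_disparity(d, focal_px=800.0, baseline_m=0.05)  # F*b = 40
# z[0,0] = 10 m, z[0,1] = 20 m, z[1,0] = inf, z[1,1] = 5 m
print(object_distance(z))  # 5.0
```

Selecting the depth "based on the overall depth of the object" could instead use a robust statistic such as the median over a segmented region, but the patent does not pin this down.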
The present invention provides a ranging method comprising the steps of: emitting light through the light emitter, the light passing through the pattern mask to form a patterned beam; projecting the patterned beam onto the target object through the lens group, forming a texture on the target object; capturing images of the target object with the camera devices to obtain texture images; and determining the depth of the target object from the texture images captured by the at least two camera devices. With the above structure and method, the present invention can measure the distance to textureless objects: by projecting a coarse texture onto a textureless object, the accurate distance between the robot and the object can be measured. This removes the drawback of existing ranging methods, which must use a precise projection device, as well as the large loss of visual information in existing projection devices, while the ranging system of the present application lowers the manufacturing cost of the robot. Moreover, the combination of the ranging system and the ranging method of the present invention greatly reduces the required fineness of the object texture. On this premise, it is feasible for the ranging device of the present invention to replace VCSEL+DOE for adding texture of adequate quality to an originally textureless object, reducing the manufacturing cost of the ranging device.
In addition, an embodiment of the present invention further provides a computer-readable storage medium storing a ranging program which, when executed by a processor, implements the following operations:

emitting light through the light emitter, the light passing through the pattern mask to form a patterned beam;

projecting the patterned beam onto the target object through the lens group, forming a texture on the target object;

capturing images of the target object with the camera devices to obtain texture images;

determining the depth of the target object from the texture images captured by the at least two camera devices.
Further, when executed by the processor, the ranging program also implements the following operations:

The step of determining the depth of the target object from the texture images captured by the at least two camera devices includes:

obtaining the calibration parameters of the at least two camera devices;

extracting the feature information of the texture images captured by the at least two camera devices at different scales;

computing, based on the feature information at the different scales, the matching costs between the texture images captured by the at least two camera devices;

performing cost aggregation on the computed matching costs and obtaining the target disparity value from the aggregated result;

computing the depth of the target object from the calibration parameters and the target disparity value.
Further, when executed by the processor, the ranging program also implements the following operations:

The step of computing, based on the feature information at the different scales, the matching costs between the texture images captured by the at least two camera devices includes:

traversing, based on the feature information at the different scales, the texture images captured by the at least two camera devices pixel by pixel over a preset disparity search range, and computing the matching cost of each pixel at each disparity.
Further, when executed by the processor, the ranging program also implements the following operations:

The step of computing the depth of the target object from the calibration parameters and the target disparity value includes:

computing the object depth value for each pixel from the relative distance between the at least two camera devices, the camera focal length, and the target disparity value;

computing the depth of the target object from the object depth value of each pixel.
The specific embodiments of the computer-readable storage medium of the present invention are substantially the same as the embodiments of the ranging method described above and are not repeated here.
It should be noted that, as used herein, the terms "include", "comprise", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or system that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element qualified by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or system that includes that element.

The serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merit of the embodiments.

From the description of the embodiments above, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes over the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium as described above (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions that cause a robot device to execute the methods described in the embodiments of the present invention.

The above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.
Claims (11)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111682866.1A CN114488103A (en) | 2021-12-31 | 2021-12-31 | Ranging system, ranging method, robot, equipment and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN114488103A true CN114488103A (en) | 2022-05-13 |
Family
ID=81510245
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111682866.1A Pending CN114488103A (en) | 2021-12-31 | 2021-12-31 | Ranging system, ranging method, robot, equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114488103A (en) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20120262553A1 (en) * | 2011-04-14 | 2012-10-18 | Industrial Technology Research Institute | Depth image acquiring device, system and method |
| US20160147230A1 (en) * | 2014-11-26 | 2016-05-26 | Irobot Corporation | Systems and Methods for Performing Simultaneous Localization and Mapping using Machine Vision Systems |
| CN106572340A (en) * | 2016-10-27 | 2017-04-19 | 深圳奥比中光科技有限公司 | Camera shooting system, mobile terminal and image processing method |
| CN109614889A (en) * | 2018-11-23 | 2019-04-12 | 华为技术有限公司 | Object detection method, related equipment and computer storage medium |
| CN111025317A (en) * | 2019-12-28 | 2020-04-17 | 深圳奥比中光科技有限公司 | Adjustable depth measuring device and measuring method |
| CN111724432A (en) * | 2020-06-04 | 2020-09-29 | 杭州飞步科技有限公司 | Object three-dimensional detection method and device |
- 2021-12-31: CN application CN202111682866.1A filed; patent CN114488103A, status active, pending
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | CB02 | Change of applicant information | Applicant after: Yunjing intelligent (Shenzhen) Co.,Ltd.; Yunjing Intelligent Innovation (Shenzhen) Co.,Ltd. Address after: 518000, Unit 02, 2/F, West Tower, Baidu International Building, 8 Haitian 1st Road, Binhai Community, Yuehai Street, Nanshan District, Shenzhen, Guangdong, China. Applicant before: Yunjing intelligent (Shenzhen) Co.,Ltd.; YUNJING INTELLIGENCE TECHNOLOGY (DONGGUAN) Co.,Ltd. (same address), China. |